Topics - ThomasVD

1
General / Texture generation stuck at 49%
« on: February 18, 2019, 06:41:23 PM »
Hey everyone,

I've encountered an issue I've never had before; my texture generation is stuck at 49% (parameterizing texture atlas).

I thought it might be an issue with the mesh geometry, so I first tried decimating the mesh to 1M polygons in ZBrush, but that didn't help. Then I tried "repairing" the reduced mesh in ZBrush, Meshlab, MeshMixer, Rhino, 3D Builder and ReMESH, but no matter which software I run "fix mesh" through, the mesh I import back into Metashape won't get past 49% texturing.
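
In case it helps with diagnosing this, a quick way to sanity-check an exported copy of the decimated mesh for zero-area (degenerate) faces before re-importing it — a minimal sketch in plain Python, assuming an ASCII OBJ export with triangulated faces and a hypothetical filename:
Code:
# Rough check for degenerate (zero-area) triangles in an ASCII OBJ export.
# Assumes triangulated faces and the hypothetical filename "decimated_mesh.obj".
import math

vertices = []
degenerate = 0
total = 0

with open("decimated_mesh.obj") as f:
    for line in f:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            total += 1
            # OBJ face indices are 1-based and may look like "12/5/7"
            idx = [int(p.split("/")[0]) - 1 for p in parts[1:4]]
            a, b, c = (vertices[i] for i in idx)
            ab = [b[i] - a[i] for i in range(3)]
            ac = [c[i] - a[i] for i in range(3)]
            cross = [ab[1] * ac[2] - ab[2] * ac[1],
                     ab[2] * ac[0] - ab[0] * ac[2],
                     ab[0] * ac[1] - ab[1] * ac[0]]
            area = 0.5 * math.sqrt(sum(v * v for v in cross))
            if area < 1e-12:
                degenerate += 1

print("{} of {} faces are (near) zero-area".format(degenerate, total))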

Any ideas?

2
General / Bundler .out images mixed around
« on: June 21, 2017, 01:41:47 PM »
Hey everyone,

I'm trying to export my PhotoScan alignment / sparse point cloud as a Bundler (.out) file.

The procedure I use in PhotoScan is as follows:
- Make sure all cameras are aligned, remove unaligned cameras if necessary
- Tools => export => cameras => export as Bundler (.out) file
- Tools => export => undistort photos => filename template: {filenum}.{fileext} => save in same folder as .out bundler file

Sometimes this works great, other times it (partially) fails. On failed attempts, when I open the Bundler .out file in other software the sparse point cloud and camera positions are shown correctly, however the images have been mixed around (i.e. image A is not where it is supposed to be, but sits in the position of, for instance, image D).

I think this is because the image numbers as exported in the .out file don't match the image numbers produced by PhotoScan's {filenum} template, so the images get shuffled around. Is there any way to make sure the Bundler .out image numbers match the PhotoScan image numbers?

Maybe these ideas could help? 
- Instead of giving my photos names like "Part1_0000", "Part1b_0001", save them all in a plain numeric format ("0000", "0001", etc.)?
- Or go into PhotoScan's "details" photo view and sort the images based on a parameter other than image name?

Any other suggestions? Does anybody have an idea how the Bundler .out file format defines what should be image 0, 1, 2, ..?
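
In case it helps the discussion, here is a minimal, untested sketch for the PhotoScan Python console that dumps the order in which the cameras sit in the chunk. My (unverified) assumption is that the Bundler export writes the cameras out in that same order; if so, the mapping file would be enough to rename the undistorted images afterwards:
Code:
# Dump the chunk's camera order as "index <tab> label <tab> source path".
# Assumption (unverified): the Bundler .out export follows this same order.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk

with open("bundler_image_order.txt", "w") as f:
    for i, camera in enumerate(chunk.cameras):
        f.write("{:d}\t{:s}\t{:s}\n".format(i, camera.label, camera.photo.path))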

Thanks in advance for any input!
Cheers,

Thomas

3
General / Technical: Optimisation of Merged Chunks
« on: March 13, 2017, 07:38:03 PM »
Dear Pros and Developers!

I have a technical question which I have not seen discussed here but which I feel can be very important to more advanced projects. 

In the optimisation of a 'normal' (not merged) chunk, PhotoScan uses whatever constraints it has (tie points, known distances, known coordinates, pre-calibrated camera parameters, ...) to optimize the sparse point cloud, camera position / orientation and camera calibration. The 'weight' of each of these constraints can be influenced by adjusting the Reference Settings pane. By removing certain constraints (such as bad tie points) the alignment result is optimised (a new "best-fit" model is calculated by bundle adjustment) and thereby the overall accuracy is hopefully increased.
In the case of a single chunk, all of these constraints are "linked", since all images have been compared and the tie points (generally the most important constraint in modern photogrammetry) form a single interlinked whole.

So far so good.

My question is: How does optimisation in a merged chunk work, and is this different depending on which chunk alignment method was used?

  • With point-based chunk alignment the tie points of both chunks are compared to one another again (time consuming). If two chunks that have been aligned this way are merged, are matches between images from different chunks then treated as valid matches during the optimisation of the merged chunk?
  • With marker-based chunk alignment the tie points of the two chunks have not been compared to each other, so presumably after merging there are no tie points between images from different chunks, and these can't be taken into account as valid matches during optimisation. Are the two merged chunks then essentially optimised independently of each other (with only the non-matching constraints contributing to the optimisation as a whole)? And are the markers that were used to align the models considered "valid matches" between the images from both datasets (or is that perhaps only the case if you choose the "merge markers" option during chunk merging)?
  • With camera-based chunk alignment, are the tie points of matching cameras merged? As an example: chunk 1 has valid matches between photo A and photo B, whereas chunk 2 has valid matches between photo B and photo C (photo A is not included in chunk 2). If I then perform camera-based chunk alignment based on the overlapping photo B, will the matches of photo B be "merged" in the merged chunk? Would I then have a single photo B with matches in both photo A (from chunk 1) and photo C (from chunk 2)? In that case camera-based chunk alignment would again create a single inter-linked network of tie points, but I don't think this happens in PhotoScan, since after merging chunks aligned this way the duplicate photos are also duplicated in the merged chunk. Is there a way to "merge" these duplicated cameras into a single picture with tie points stretching across photos A, B and C?
If anyone has answers to these questions, or simply wants to contribute their thoughts on this matter, I think that'd be really useful!

It has important implications for questions such as: Which chunk alignment method allows for the best optimisation? When precisely is it best to perform optimisation (before merging, after merging, ...)? Is good optimisation only possible if all images are aligned together (i.e. without working with chunks)?
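
One way to probe the first two points above empirically would be to check, in a merged chunk, whether a camera pair that came from different original chunks shares any tie point tracks. A rough, untested sketch for the PhotoScan Python console (the camera labels are hypothetical placeholders):
Code:
# Count tie point tracks shared by two cameras that originated in different chunks.
# If the count is zero, no cross-chunk matches survived the merge.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk  # the merged chunk

cam_a = next(c for c in chunk.cameras if c.label == "chunk1_photo")  # hypothetical label
cam_b = next(c for c in chunk.cameras if c.label == "chunk2_photo")  # hypothetical label

projections = chunk.point_cloud.projections
tracks_a = set(p.track_id for p in projections[cam_a])
tracks_b = set(p.track_id for p in projections[cam_b])

print("shared tie point tracks:", len(tracks_a & tracks_b))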

Cheers!

4
General / Ultra wide lens choice
« on: November 16, 2016, 07:24:33 PM »
Hi everyone,

I'm looking to buy an ultra wide lens for my Nikon D5300 (DX) camera, to use exclusively for photogrammetry (especially in scenarios with limited space). I understand that PhotoScan's algorithms assume roughly a 50 mm equivalent focal length and Brown's calibration model, but I guess that as long as the lens doesn't introduce too much image distortion the software should have no problem aligning and reconstructing a scene?

I spent a lot of time Googling and have narrowed down my choices to these 4 options:
- Tokina 11-16mm - this one seems to be a customer favourite on online forums, but reportedly has slightly higher distortion
- Tokina 11-20mm - this one gets the best DxO rating with my camera
- Tokina 16-28mm - according to DxO this one has the least distortion
- Sigma 10-20mm - widest angle, gets good reviews, but apparently less sharp toward the edges

Each gets great reviews, but of course none of those reviews take into account photogrammetry!

Which would you recommend? I'm looking for something as wide as possible that still gets good photogrammetry results. I've taken into consideration sharpness, availability of auto-focus, price and amount of distortion - am I missing anything? Is there some reason any of these would simply be a bad choice for photogrammetry?

Any feedback is as always much appreciated!

Cheers,

Tom

5
General / Software version of a specific PhotoScan model?
« on: September 27, 2016, 12:16:55 PM »
Hey everyone,

I'm wondering if there's a way of determining the original software version used to build a particular PhotoScan model?

If I open an old .psz file with the most recent version of PhotoScan and export a report of that particular model, the report will simply state the current version of PhotoScan (i.e. the one I used to view the old file) as the software version. Is there a way of finding out which PhotoScan version was originally used to produce the model?
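
One partial workaround I can think of (not a full answer): a .psz project is a ZIP archive containing a doc.xml, and that XML carries a version attribute. As far as I can tell this is the project format version rather than the exact build number, but it should at least narrow down which release last saved the file. A minimal sketch, assuming a hypothetical filename:
Code:
# Peek at the format version stored inside a .psz project (a ZIP archive with doc.xml).
import zipfile
import xml.etree.ElementTree as ET

path = "old_project.psz"  # hypothetical filename

with zipfile.ZipFile(path) as archive:
    with archive.open("doc.xml") as f:
        root = ET.parse(f).getroot()

print("document format version:", root.attrib.get("version"))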

Any help is much appreciated!
Cheers,

Tom

6
General / PhotoScan + endoscope?
« on: November 07, 2015, 05:31:39 PM »
Hey everyone!

Just wondering if anyone here has ever experimented with reconstructing the insides of, for instance, small objects using an endoscope? It sounds like a tantalizing prospect to be able to access hard-to-reach small spots and build 3D models of them, but I'm wondering what the user experiences are. Are these sorts of cameras super-expensive? Do they not calibrate well in PhotoScan? Are the resulting images of sufficiently high resolution?

Think this is an interesting topic to explore further :)

7
General / Organizing the Workspace?
« on: August 29, 2015, 06:56:23 PM »
I feel like this should be super easy but I can't figure out how to:
  • Reorder the position of the chunks in the Workspace menu on the left
  • Add an additional Workspace within the same PhotoScan file

I just want to get a bit more order in my PhotoScan files. It seems like you can click and drag the chunks in the menu on the left, but I can't find any way to 'drop' them anywhere else.

If this isn't possible yet it'd be a small and simple feature request :)

8
General / Eurocom laptop build
« on: July 02, 2015, 01:42:55 PM »
Hi everyone!

I'm looking to buy a very powerful laptop for fieldwork photogrammetry using PhotoScan. After some online research, Eurocom http://www.eurocom.com/ec/main()ec seems to provide some of the most powerful/configurable laptops on the market. One great advantage is that they can use desktop CPUs (the laptop's battery life and noise are not an issue), and that the BIOS appears to be very configurable (I want to be able to experiment with various CPU hyperthreading settings). Of their different models I think the Panther 5 is the most appropriate for PhotoScan work.

Since I don't have extensive experience configuring computers, I was wondering if any of you would like to have a go at (virtually) assembling an awesome PhotoScan-cruncher laptop (http://www.eurocom.com/ec/configure(1,224,0)ec) for between €3,500 and €4,000? Any advice would be very much appreciated.

The most important bottleneck for the work we are carrying out is the CPU (initial photo alignment phase), so I was looking at the Intel Core i7-4960X, but if someone has convincing arguments for using a (much more expensive) Xeon processor I'd be happy to hear them!

Thanks in advance for any input from the community :)

Tom

9
General / Intel Core i7-4700MQ vs i7-5960X - performance boost?
« on: June 22, 2015, 01:27:25 PM »
Hi everyone!

I'm looking at building a new PC for a project in which the "align photos" stage in particular needs to go very fast - data will be coming in steadily during the day, and images should be aligned by the evening.

Since the "align photos" stage appears to depend mostly on CPU, I'm thinking of making a build with the Intel Core i7 5960X CPU, as this seems to be the best performing i7 according to the Anandtech PhotoScan benchmarks: http://www.anandtech.com/bench/CPU/1057

Currently I'm using a laptop with an Intel Core i7-4700MQ CPU (4 cores, 8 threads, up to 3.4 GHz). My question is how much of a performance boost I can expect when I upgrade to a system with the i7-5960X?

Currently, alignment of ca. 500 images at "low" accuracy (pair preselection disabled) takes roughly 2 hours. Can I expect this time to be cut in half? To a third? Any information would be very welcome!

Cheers,

Thomas

10
General / Sharp video frame extraction software?
« on: June 17, 2015, 11:34:04 PM »
Hi everyone!

I'm currently working on a project in the Netherlands where we are recording a wreck using GoPro video footage; we extract roughly two frames per second from the footage and then align those frames using PhotoScan.

While this works quite well, at the moment about half of our extracted frames are sharp while the other half are blurry. Our workflow would be sped up significantly if we could simply get rid of the blurry images, or use frame extraction software that filters the frames and only extracts sharp ones.

Does anyone know of a software package or workflow that could help us obtain only SHARP frames from our video footage?
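
To make the idea concrete, this is roughly the kind of filter I have in mind — a minimal, untested sketch using OpenCV that grabs about two frames per second and keeps only those whose Laplacian variance (a common sharpness proxy) exceeds a threshold. The filename and threshold are hypothetical and would need tuning per dataset:
Code:
# Extract ~2 frames/second from a video and keep only the reasonably sharp ones.
import cv2

VIDEO = "wreck_gopro.mp4"   # hypothetical filename
THRESHOLD = 100.0           # sharpness cutoff (variance of the Laplacian); tune per dataset
STEP_SECONDS = 0.5          # roughly two frames per second

cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
step = max(1, int(round(fps * STEP_SECONDS)))

index = 0
kept = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % step == 0:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        if sharpness >= THRESHOLD:
            cv2.imwrite("frame_{:06d}.jpg".format(index), frame)
            kept += 1
    index += 1

cap.release()
print("kept {} sharp frames".format(kept))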

Any help is much appreciated!

Tom

11
Python and Java API / Align coordinate system to bounding box
« on: January 24, 2015, 03:12:22 PM »
Hi everyone,

I have no experience whatsoever with coding, but I'm trying to adapt this script so it can be used with version 1.1.0:

Code:
#rotates model bounding box in accordance of coordinate system for active chunk
#bounding box size is kept
#compatibility: Agisoft PhotoScan Professional 0.9.0

import PhotoScan
import math

doc = PhotoScan.app.document

chunk = doc.activeChunk

T = chunk.transform

v = PhotoScan.Vector( [0,0,0,1] )

v_t = T * v

v_t.size = 3

m = chunk.crs.localframe(v_t)

m = m * T

s = math.sqrt(m[0,0]*m[0,0] + m[0,1]*m[0,1] + m[0,2]*m[0,2]) #scale factor
# S = PhotoScan.Matrix( [[s, 0, 0], [0, s, 0], [0, 0, s]] ) #scale matrix

R = PhotoScan.Matrix( [[m[0,0],m[0,1],m[0,2]], [m[1,0],m[1,1],m[1,2]], [m[2,0],m[2,1],m[2,2]]])

R = R * (1. / s)

reg = chunk.region
reg.rot = R.t()
chunk.region = reg

So far I've made it into this:
Code:
import PhotoScan
import math

doc = PhotoScan.app.document
chunk = doc.chunk

T = chunk.transform.matrix

v_t = T * PhotoScan.Vector( [0,0,0,1] )
v_t.size = 3

m = chunk.crs.localframe(v_t)

m = m * T

s = math.sqrt(m[0,0]*m[0,0] + m[0,1]*m[0,1] + m[0,2]*m[0,2]) #scale factor
# S = PhotoScan.Matrix( [[s, 0, 0], [0, s, 0], [0, 0, s]] ) #scale matrix

R = PhotoScan.Matrix( [[m[0,0],m[0,1],m[0,2]], [m[1,0],m[1,1],m[1,2]], [m[2,0],m[2,1],m[2,2]]])

R = R * (1. / s)

reg = chunk.region
reg.rot = R.t()
chunk.region = reg

However I get the following error message:

line 16, in <module>
    m = chunk.crs.localframe(v_t)
AttributeError: 'NoneType' object has no attribute 'localframe'

Any tips on how to overcome this issue?
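
For comparison, here is one possible workaround I've been looking at (untested, and based on my guess that the error simply means the chunk has no coordinate system assigned, so chunk.crs is None): fall back to the chunk transform on its own when there is no CRS.
Code:
# Same idea as the script above, but guarding against chunk.crs being None
# (my guess at the cause of the AttributeError).
import PhotoScan
import math

doc = PhotoScan.app.document
chunk = doc.chunk

T = chunk.transform.matrix

if chunk.crs:
    # georeferenced chunk: build the local frame at the chunk origin
    v_t = T * PhotoScan.Vector([0, 0, 0, 1])
    v_t.size = 3
    m = chunk.crs.localframe(v_t) * T
else:
    # local coordinates only: use the chunk transform as-is
    m = T

s = math.sqrt(m[0, 0] * m[0, 0] + m[0, 1] * m[0, 1] + m[0, 2] * m[0, 2])  # scale factor

R = PhotoScan.Matrix([[m[0, 0], m[0, 1], m[0, 2]],
                      [m[1, 0], m[1, 1], m[1, 2]],
                      [m[2, 0], m[2, 1], m[2, 2]]]) * (1.0 / s)

reg = chunk.region
reg.rot = R.t()  # the region stores the transposed rotation
chunk.region = reg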

Thanks in advance,

Thomas

12
General / Merging different chunks flawlessly?
« on: January 22, 2015, 11:08:59 AM »
[just for the Agisoft PhotoScan Professional Edition]

Dearest wonderful Agisoft community,

I have a problem with a dataset in which I processed 462 pictures in 4 chunks. The original pictures were 'bad' data: old scans of even older slides - no EXIF, no GPS data. Nonetheless, the 462 pictures eventually aligned quite well using marker-based 'guided' picture alignment. The 4 chunks were then aligned using marker-based chunk alignment, however there is a geometric error of about 1 m between matching chunks.

So what I have now are: approximate camera positions (in local coordinate system), approximate camera calibration, 400+ manually placed markers used to align pictures and align chunks.

I'm trying to figure out whether I could use this data to reprocess all 462 pictures in one chunk (starting from the align photos stage), but using the camera positions to somehow 'restrict' where the software looks for matches (because otherwise it will find incorrect matches in the poor-quality pictures). The hope is to then create a coherent single-chunk model without the above-mentioned geometric error.

What I have already tried: I merged the 4 chunks into one, then did 'export cameras' in PhotoScan XML format, made a new file with the same 462 pictures in one chunk, and did 'import cameras'. However, after reconstructing the sparse point cloud, it seems like PhotoScan uses these imported cameras as 'absolute truth' and does not correct them anymore based on the matches it finds. As such, the model contains the same geometric distortion as the 4 merged chunks.

What I have already tried #2: I did 'export cameras' in Omega Phi Kappa txt format, created the file with the 462 pictures in one chunk, and imported the camera coordinates from the Omega Phi Kappa file in the reference menu. I could then 'align photos' using the reference coordinate data as a guide. That way PhotoScan does not treat the camera coordinates as 'absolute', but just as an approximation. However, I could not correctly extract Yaw, Pitch and Roll from the Omega Phi Kappa file, and as a result the pictures are getting projected on the wrong side of the coordinates.

If anyone has any input on how I might be able to set an 'error margin' on the imported camera data (first test), or how I can correctly export estimated camera orientation along with estimated coordinates (second test) or any other procedure for obtaining the desired results, please let me know.
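
On the second point, rather than round-tripping through the Omega Phi Kappa text export, the estimated camera positions and rotations can in principle be pulled straight from the Python console. A rough, untested sketch (attribute names as in the 1.x Python API; the output filename is hypothetical) that writes, for every aligned camera, its centre plus the upper-left 3x3 block of its camera-to-world matrix:
Code:
# Dump estimated camera centres and rotation matrices for all aligned cameras.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk
T = chunk.transform.matrix  # chunk -> world transform

with open("cameras_estimated.txt", "w") as f:
    for camera in chunk.cameras:
        if camera.transform is None:
            continue  # skip cameras that did not align
        M = T * camera.transform                 # camera -> world (4x4)
        c = M * PhotoScan.Vector([0, 0, 0, 1])   # camera centre in world coordinates
        row = [camera.label, c.x, c.y, c.z]
        row += [M[i, j] for i in range(3) for j in range(3)]  # 3x3 block (rotation times chunk scale)
        f.write(" ".join(str(v) for v in row) + "\n")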

13
General / "Optimize cameras" -> results worse than before?!
« on: January 19, 2015, 10:12:59 PM »
Hi everyone,

I'm trying to do the 'optimize cameras' step based on the known distance of scale bars, as explained on pp. 35-36 of the Professional Edition user manual: http://www.agisoft.com/pdf/photoscan-pro_1_1_en.pdf. In my project, hundreds of scale bars cover a wreck, and each one is exactly 75 cm long.

I placed markers at both ends of these scale bars, selected both markers to make a scale bar, then set each scale bar's distance to 0.75 m. I then clicked 'update' to see what the total error was, and it was quite low at 0.011389. Then I selected all scale bars and clicked 'optimize cameras'. However, after optimization the total error has actually increased to 0.029667.

Shouldn't optimization find the best fit between these values and improve my results?
What can I do to actually improve results? Should I change the 'reference settings' or something?
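
In case the reference settings turn out to be the lever, this is roughly what I imagine a scripted version would look like — an untested sketch; I'm not certain all of these attributes (in particular reference.accuracy on scale bars, or calling optimizeCameras from the console) exist in the exact version I'm running, so treat the names as assumptions:
Code:
# Tighten the assumed accuracy of every 0.75 m scale bar, then re-run optimization.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk

for scalebar in chunk.scalebars:
    scalebar.reference.distance = 0.75   # known length in metres
    scalebar.reference.accuracy = 0.001  # assumed measurement accuracy (1 mm)
    scalebar.reference.enabled = True

chunk.optimizeCameras()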

Any help is much appreciated!

14
General / Shipwreck: marker-based chunk align
« on: September 25, 2014, 07:32:13 PM »
Hi everyone!

I'm wondering whether after performing marker-based chunk alignment the chunks can be merged and the markers then used in order to "optimize" the merged sparse point cloud (like you would if you had ground control points)?
In fact, is there any way to benefit from using the "optimize" function without having ground control points?

The project I'm working on is a shipwreck site. Due to low visibility we had to get very close to the wreck while taking pictures, which in turn meant we had to use a LOT of pictures in order to have sufficient overlap to cover the entire site. In total about 7000 pictures are used, divided into several chunks (see screenshot). These chunks have been aligned using marker-based alignment (after meshing each chunk and generating a texture on which features were then picked). However, not every chunk fits perfectly onto the next (very slight differences in geometry cause the markers not to align perfectly). Therefore I would like to somehow optimize the final merged model so that the geometry does align 100%.

Any advice / suggestions on workflow are very much appreciated!
Thank you,

Thomas

15
General / 3D GIS software?
« on: September 01, 2014, 06:42:02 PM »
Hey everyone,

I'm looking for a workflow where I can export a PhotoScan model (not an orthophoto or DEM) and view it in a 3D GIS software package.
Does anyone have experience with this, or any suggestions on where to begin looking?

Thank you sincerely,

Thomas
