Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - ThomasVD

Pages: [1] 2 3 ... 8
1
General / Re: Texture generation stuck at 49%
« on: February 19, 2019, 02:23:06 PM »
Hey Alexey,

Yeah the UV map is super fragmented (see attachment) but texture looks alright for our purposes :)

Thanks again!

2
General / Re: Texture generation stuck at 49%
« on: February 19, 2019, 09:36:13 AM »
Hi Alexey,

Thanks for the reply - that was probably it. I decided to leave texturing running overnight ("just to be sure") and after 5 hours the UV mapping apparently completed :) (followed by just half an hour of blending time)!

3
General / Texture generation stuck at 49%
« on: February 18, 2019, 06:41:23 PM »
Hey everyone,

I've encountered an issue I've never had before; my texture generation is stuck at 49% (parameterizing texture atlas).

I thought it might be an issue with the mesh geometry, so I tried first decimating the mesh to 1M polygons in ZBrush, that didn't help. Then I tried "repairing" the reduced mesh in ZBrush, Meshlab, MeshMixer, Rhino, 3D Builder and ReMESH but no matter which software I run "fix mesh" through, the mesh I import back into Metashape won't get past 49% texturing.

Any ideas?

4
Sometimes for these single line recordings it helps to turn off adaptive camera model fitting on alignment. But in general best way is indeed to ensure sideways overlap.

As Dave Martin suggests you could work with tight Z accuracy, but I'd recommend setting that to the camera positions rather than control points on the surface (set Z at 1.1m for each camera, define very strict Z accuracy, then optimize cameras). 

You can try Mohammed's suggestion of setting the camera profile to "fisheye" in the camera calibration menu, but I've found that in general the underwater light refraction "cancels out" to a certain extent the fisheye effect, so I've had better luck with just keeping it on standard camera profile (at least with GoPros underwater - might differ with other fisheye lenses).

5
General / Re: Adding images to existing aligned image set?
« on: December 21, 2018, 02:14:05 AM »
Oh wow, I somehow totally missed the fact that this feature had been added, thank you Kiesel and Alexey for responding so quickly and pointing it out - really glad to hear it's been introduced!

I had searched the forums for "add image to existing alignment" (and similar search queries) but couldn't find much, which is why I revived this thread from 2014, but of course now that I search for "incremental alignment" I get a lot of info on the new workflow.

However since I didn't know the feature existed I didn't have "Keep key points" checked in the Preferences pane :( Would the workflow as I described it above still work as a work-around for this particular case?

6
General / Re: How to detect "island alignments" in the set of photos
« on: December 20, 2018, 10:46:02 AM »
Hey Tom,

Seems like we're facing a lot of the same issues - I also work on underwater (sometimes interlaced video) data so that might explain it :)

Just want to bump this post because I'd also be interested in a solution. Not only in case of "ghosts" in the same chunk (within same alignment), but also when PhotoScan fails to align all images, but can in fact produce various different separate alignments from other images in the same image set.

To clarify: I often have a dataset of let's say 1500 relatively poor images. PhotoScan might only align 500, and then fail to align the others. I then have to duplicate the chunk, reset the alignment, pick the images which PhotoScan failed to align, choose "align selected cameras" and then often the software does manage to create a second (separate) alignment from another let's say 500 images. Repeat again, reset camera alignment, choose the remaining 500 images which so far failed to align, "align selected cameras" and voila, you now have 3 chunks each containing a large part of your scene, which you can start to merge using markers or whatever.

This is a bit of a silly tedious process given that I have to manually duplicate each chunk, reset cameras, re-run alignment on a subset of images, in order to see whether or not PhotoScan can somehow manage to align a second chunk of images in the same dataset. The example above is still OK, but if you have dozens of chunks based on 15 000+ images it gets a bit ridiculous.

By contrast software like RealityCapture just finishes alignment of your single dataset and then gives you 3 different chunks which contain the 3 different alignments it managed to produce from your data.

If anyone has a script that could do something similar that would be amazing - otherwise I'll submit a couple of feature requests later this week.

7
General / Re: Adding images to existing aligned image set?
« on: December 20, 2018, 10:25:20 AM »
A similar issue (and possible solution):
I want to add images to an existing alignment, I don't mind having to re-align all images, but the images that are already aligned should stay in approximately the same position.

Concretely the situation is:
- I've just processed a dataset of ca 15 000 video frames of an underwater shipwreck site. Because of bad visibility, bad lighting and bad overlap there has been a lot of manual work involved in getting all the images to align (a lot of "reset camera alignment, align selected cameras, manual marker placement and optimisation"), so I've worked on it for various days in order to get this far.
- Now I've received 400 additional, good images of the site from my client. The new photos probably won't contribute much to the geometry but could be very good for texturing.

=> I want to align these 400 images to the remaining 15 000 video frames without changing the position of the original images (too much).

Would this be the correct workflow?

- In the PhotoScan document with 15 000 aligned video frames, save the camera calibration of each camera group from the Camera Calibration menu.
- In the PhotoScan document with 15 000 aligned video frames, go to Reference pane, click "export", choose ".txt" format (PhotoScan says: "can't export reference data when I choose .xml)
- export the "cameras" making sure "save location" and "save rotation" are checked. In my case the estimated values will be saved, since I don't have ground control coordinates.

- make a new PhotoScan document, import the 15 000 image frames previously aligned.
- in the Camera Calibration menu, import the previous camera calibration of each group and check "fix calibration"
- in the reference pane, click "import", choose the .txt file containing my old reference data, and set XYZ and Yaw, Pitch, Roll column to match the column containing the estimated values from my previous alignment. If I now switch on cameras in 3D view, a blue ball appears for each camera location (shouldn't this be a blue square with a line, to show that the camera rotation is also known?)
- under Reference Settings adjust Camera Accuracy to something small like 0.05m => ie the cameras can only move 5cm from their current location? (how does this work when the alignment is not scaled yet?)
- add the 400 new images to the same PhotoScan document
- align all 15 400 images together. My settings would be: medium accuracy, generic preselection on, reference preselection on (so it uses the previous alignment), key point limit 40 000, tie point limit 0, adaptive camera model fitting checked (unless someone has suggestion as to why another setting would be better).

Hopefully by importing the Reference data and setting a strict camera accuracy PhotoScan will keep the old positions (more or less) and have an easier time aligning the images relative to each other (when compared to the very manual first time alignment)? I would appreciate any input from the community and developers!

Cheers,

Thomas

8
Bug Reports / Re: Speed of 1.3.2 onwards vs 1.3.0
« on: October 30, 2018, 09:36:00 AM »
Going to just leave Alexey's solution for version 1.4.2 onwards here for those troubleshooting this problem now (original post here: http://www.agisoft.com/forum/index.php?topic=9294.msg43358#msg43358). Works for me!

Alexey:
Hello dense_data,

Starting from PhotoScan version 1.4.2 (both Standard and Professional) you can modify that "hidden"setting using Tweaks dialog that is accessible from the Advanced Preferences tab.

Create the new entry with the following name:
main/dense_cloud_max_neighbors
and assign the desired pairs value, like 50 or 60, for example.

To remove the limit you can either delete the "tweak" or set the value to "-1".

9
Feature Requests / Re: E-mail alerts when finished processing
« on: September 29, 2017, 04:40:08 PM »
Hi all,

I think an email alert that could be selected with a processing step would be very helpful. This could possibly even be implemented as an option in batch processing so that I could choose to send emails after every step, when one finishes, or when the whole batch finishes. Our processing workstation is not used for any other purpose and it would be nice to not have to check it throughout the day.

Thanks,
Max

I'd like to second this, email alerts as an option within batch processing would be very helpful for us non-python wizards.

Edit: and given that this topic has over 6000 views I'd wager other people are also interested in this feature ;)

10
General / Bundler .out images mixed around
« on: June 21, 2017, 01:41:47 PM »
Hey everyone,

I'm trying to export my PhotoScan alignment / sparse point cloud as a Bundler (.out) file.

The procedure I use in PhotoScan is as follows:
- Make sure all cameras are aligned, remove unaligned cameras if necessary
- Tools => export => cameras => export as Bundler (.out) file
- Tools => export => undistort photos => filename template: {filenum}.{fileext} => save in same folder as .out bundler file

Sometimes this works great, other times it (partially) fails. On failed attempts, when I open the Bundler .out file in other software the sparse point cloud and camera positions are correctly shown, however the images have been mixed around (ie: image A is not where image A is supposed to be, but is in the position of for instance image D).

I think this is because the image numbers as exported in the .out file don't match the image numbers as produced by PhotoScan's {filenum} command, so the images get shuffled around. Is there any way to make sure the Bundler .out image numbers match the PhotoScan image numbers?

Maybe these ideas could help? 
- Instead of giving my photos names like "Part1_0000", "Part1b_0001", just save them all as a number format "0000", "0001", etc?
- Or go into PhotoScan's "details" photo view and sort the images based on a parameter other than image name?

Any other suggestions? Does anybody have an idea how the Bundler .out file format defines what should be image 0, 1, 2, ..?

Thanks in advance for any input!
Cheers,

Thomas

11
General / Re: single, dual gtx 1060 or single 1070, 980
« on: April 28, 2017, 12:11:51 PM »
Quick question: In the new version of PhotoScan since 1.3 , when using a dual GPU setup, is it still advisable to have a CPU with higher number of cores, since one core will be disabled per CPU?

I'm looking at a system with Dual GTX 1080's (not in SLI) + either an intel i7 7700K (4 cores) or an intel i7 6800K (6 cores).

12
1-2) Feature points extraction and matching steps do not depend on the camera calibration, so you can re-run alignment phase from Console using the following command: PhotoScan.app.document.activeChunk.alignPhotos()
It allows to skip long image matching procedure.

Would selecting all cameras, choosing 'reset camera alignment' and then selecting all cameras (all unaligned by now), and choosing 'align selected cameras' achieve the same result, or is there a difference?

13
General / Re: Technical: Optimisation of Merged Chunks
« on: March 17, 2017, 12:36:00 PM »
Thank you for your comments Geobit and Yoann,

Just a few thoughts, perhaps it's easiest if I use quotes: 

I must say I always get frustrated when our Alexey resumes something saying, "this can be done with Python...". (I suspect he smiles like an evil swot when he adds " so easily"... Yes it is great to have the py api, but not everyone knows how to use it, and there are dozens of simple but terribly useful things that should already be among menu items, and "Split in chunks" is one of them.
- I agree some things could just be integrated into the software, rather than having to rely on Python. For people who don't know Python (like me), remember there is also this library with scripts, including a "split in chunks" script: http://wiki.agisoft.com/wiki/Python I know the scripts didn't work anymore the day after PS 1.3 was released, but perhaps they've now been updated.

I usually merge them at the end after chunk alignment (no matter what method) and re-align the merged chunk. I'm almost sure that Photoscan takes the camera orientations in the merged chunk as initials so there is no need to export and create a new chunk from scratch, but on the contrary one can build the sparse cloud and optimize directly.
- I don't think this is in fact the case. I think if you re-run "align images" in a merged chunk, PhotoScan simply starts from scratch (perhaps unless you use pre-selection: reference) and re-calculates everything, ignoring the previously calculated camera orientations.

One goal of this procedure is to keep cameras grouped (calibration wise) as they were originally and that might be the right path in some cases. i.e. when chunks were photographed in diferent sesions with the same camera but eventually with diferent focus settings. (to create a chunk from blank might lead Pscan to put all photos in the same basket)
- Remember that in such cases you can also simply work in a single chunk, and then group photographs taken in different sessions in different camera calibration groups, using the camera calibration groups on the left of the Tools -> Camera Calibration window.

I think that there must be a reason to not keep (merge) tie points created during chunk to chunk alignment. My guess is that the quality of the feature point matches ch2ch cannot be evaluated in the same way as is done inside a chunk where they link photos-to-photos and their residuals can be miminized by moving photos relatively. On the contrary, during ch2ch alignment, the relative orientations between photos inside a chunk must be kept while the entire block moves during iterations to mimimize the global residual for all at once. After ch2ch alignment one tie point could have a very big reprojection error due to the rigidnes of the two structures it belongs to... well this is just my theory.
- This might be true. Even then though, in case they can't be compared in the same way, surely there is some way to interlink everything, and perhaps simply give less weight to tie points matched during chunk alignment. In case of camera-based chunk alignment though, if both chunks have been aligned using the same image alignment settings, I would think you could almost directly tranfser tie points from photo A to B to C with the same weight. But of course I don't know what the magic behind the scenes in the PS algorithms is, so this is something Alexey would have to comment on :)

Cheers,

Thomas

14
General / Re: Technical: Optimisation of Merged Chunks
« on: March 14, 2017, 06:11:42 PM »
Thank you Yoann and Alexey,

So just trying to wrap my head around the implications - please correct me if I'm wrong:

1. If working with multiple chunks, it's best to simply optimise each chunk individually (this gives you more control over the optimisation parameters for that specific part of the dataset), afterwards merge the chunks (and perform no more optimisation of the merged chunk). There is no advantage to optimising a merged chunk vs optimising the individual chunks, since the image sets from different chunks aren't "interlinked" after merging.

Exception: if using "marker-based chunk alignment" the chunks in question are interlinked, since the markers are considered as valid matches between images from the different chunks, but only when the "merge markers" option is chosen in the merge chunks dialogue.
Question: in this case is there a difference between marker types (green flags, blue flags and grey flags) in the photos during optimisation procedure (ie do the different flags for instance get a different weight)?

2. The only way to get a truly interlinked alignment result for optimisation, is to align all images in a single chunk.

Question: say I have a large dataset, and my cameras have no known coordinates - would it in this case be a good idea to align the images in different chunks, then align those chunks, export the approximate camera coordinates as reference, create a new chunk with all images in a single chunk, import the camera coordinates and re-align all images in a single chunk using "pair-preselection: reference"? In that case you would save on alignment time since we now have reference camera coordinates + all images would be interlinked in a single alignment result.

3. Is there a good reason why the chunk alignment results are not merged in "point-based chunk alignment" or "camera-based chunk alignment"?

In case of marker-based chunk alignment you already have the option to choose "merge markers" in merge chunks dialogue, thereby interlinking the chunks. Couldn't you just as easily: 
=> In the case of point-based chunk alignment have an option to choose "merge tie points" in merge chunks dialogue, since tie points have been compared between chunks?
=> In the case of camera-based chunk alignment (my personal preference) have an option to choose "merge cameras" in merge chunk dialogue, thereby avoiding duplicate cameras in the merged chunk AND connecting the tie points from photo A to photo B to photo C (see my example in original post)?

Wouldn't this allow working in different chunks (thereby significantly reducing alignment time) AND good optimisation of a merged chunk as a whole by essentially linking tie points from different chunks? Or am I missing something?

Cheers,

Tom

15
General / Technical: Optimisation of Merged Chunks
« on: March 13, 2017, 07:38:03 PM »
Dear Pros and Developers!

I have a technical question which I have not seen discussed here but which I feel can be very important to more advanced projects. 

In the optimisation of a 'normal' (not merged) chunk, PhotoScan uses whatever constraints it has (tie points, known distances, known coordinates, pre-calibrated camera parameters, ...) to optimize the sparse point cloud, camera position / orientation and camera calibration. The 'weight' of each of these constraints can be influenced by adjusting the Reference Settings pane. By removing certain constraints (such as bad tie points) the alignment result is optimised (a new "best-fit" model is calculated by bundle adjustment) and thereby the overall accuracy is hopefully increased.
In the case of a single chunk, all of these constraints are "linked", since all images have been compared and the tie points (generally the most important constraint in modern photogrammetry) form a single interlinked whole.

So far so good.

My question is: How does optimisation in a merged chunk work, and is this different depending on which chunk alignment method was used?

  • With point based chunk alignment the tie points of both chunks are compared to one another again (time consuming). If two chunks that have been aligned based on point based chunk alignment are merged, are matches between images from different chunks considered as valid matches between those images during the optimisation of the merged chunk?
  • With marker based chunk alignment the tie points from both chunks have not been compared to each other, so presumably after merging both chunks, there are no tie points between images from the different chunks, so these can't be taken into account as valid matches between images during optimisation? Are the two merged chunks then essentially optimised independently from each other (only taking non-matching feature-constraints into account for the optimisation as a whole)? Are the markers that were used to align the models considered "valid matches" between the images from both datasets (or is this perhaps only the case if you choose "merge markers" option during chunk merging?)
  • With camera based chunk alignment are the tie points of matching cameras merged? As an example: chunk 1 has valid matches between photo A and photo B, whereas chunk 2 has valid matches between photo B and photo C (photo A is not included in chunk 2). If I then perform camera based chunk alignment based on overlapping photo B, will the matches of photo B be "merged" in the merged chunk? Do I now have a single photo B which has matches in both photo A (from chunk 1) and photo C (from chunk 2). In this case the camera based chunk alignment would allow to again create a single inter-linked network of tie points, but I think this is not the case in PhotoScan since after merging chunks from camera-based chunk alignment the duplicate photos are also duplicated in the merged chunk. Is there a possibility to "merge" these camera positions into a single picture with tie points stretching across photos A, B and C?
If anyone has answers to these questions, or simply wants to contribute their thoughts on this matter, I think that'd be really useful!

It has important implications for questions such as: Which chunk alignment method allows for best optimisation? When precisely would be best to perform optimisation (before merging, after merging, ..)? Is good optimisation only possible if all images are aligned together (not working with chunks)?

Cheers!

Pages: [1] 2 3 ... 8