Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - bigben

Pages: [1] 2
General / Optimising roll for camera stations
« on: August 22, 2020, 09:23:11 AM »
Hi. I'm trying to use Agisoft to align images for a single camera station where the roll is variable, but the exported roll is always fixed at 0. I noticed when exporting the Agisoft XML of the cameras that there appears to be a setting to enable the rotation reference. Is there any way to set this to true?

Code:
<camera id="0" sensor_id="1" label="DJI-0001">
  <reference x="144.960411945" y="-37.798632054999999" z="32.828000000000003" yaw="356.29999999999995" pitch="83.999999999999972" roll="-0" enabled="true" rotation_enabled="false"/>
</camera>
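One way to experiment with that flag is to rewrite the exported XML before re-importing the references. This is only a sketch: the attribute name comes from the snippet above, but whether Photoscan honours an edited rotation_enabled on re-import is an assumption.

```python
# Sketch: flip rotation_enabled to "true" on every <reference> element of
# an exported camera XML. The attribute name is taken from the export
# above; honouring the edited flag on re-import is an assumption.
import xml.etree.ElementTree as ET

def enable_rotation(xml_text):
    root = ET.fromstring(xml_text)
    for ref in root.iter("reference"):
        ref.set("rotation_enabled", "true")
    return ET.tostring(root, encoding="unicode")
```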

General / Batch process > Run script
« on: July 19, 2018, 10:38:55 AM »
Hi All

I'm slowly building up a script for a workflow, but for testing I'm also using the Batch Process option for most of the common tasks. If I use Run Script as a batch process step, the script only runs on one chunk. Is this just a single script run, so I have to use the script itself to process all chunks, or is selecting All Chunks supposed to run the script on each chunk?
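In the meantime, a workaround sketch: have the script loop over the chunks itself rather than relying on the batch step. doc.chunks is from the PhotoScan 1.x scripting API; the per-chunk step here is just a placeholder.

```python
# Sketch: run the same steps on every chunk from within the script itself,
# so a single Run Script step covers the whole project. doc.chunks is from
# the PhotoScan 1.x scripting API; the per-chunk step is a placeholder.
def process_all(doc, step):
    labels = []
    for chunk in doc.chunks:      # every chunk in the open project
        step(chunk)
        labels.append(chunk.label)
    return labels

try:
    import PhotoScan              # only importable inside PhotoScan
    process_all(PhotoScan.app.document,
                lambda chunk: print("processed", chunk.label))
except ImportError:
    pass                          # running outside PhotoScan
```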

[edit] Can this be moved to General Discussion please... posted here by mistake.

General / Camera-based chunk alignment/ merging chunks
« on: November 27, 2015, 01:58:26 PM »
Hi All

I'm working on a larger project at the moment and am trying to get a better practical understanding of aligning and merging chunks.

Original project (well the first 1/3 of it) is ~1200 cameras which I've split into 8 chunks of 200.  The model essentially follows a winding path, so each chunk has an overlap of about 50 cameras with the previous chunk for alignment.  I've done a couple of quick tests and the alignment works pretty well but I'm wondering what the best process is for generating the dense cloud.
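For reference, the splitting scheme described above (chunks of 200 cameras sharing 50 with the previous chunk) can be sketched like this; the helper itself is hypothetical, only the sizes come from the post.

```python
# Sketch: split an ordered camera list into overlapping chunks
# (200 cameras per chunk, 50 shared with the previous chunk, as above).
def overlapping_chunks(cameras, size=200, overlap=50):
    step = size - overlap
    return [cameras[i:i + size] for i in range(0, len(cameras) - overlap, step)]
```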

Is it better to create a dense cloud of each chunk and merge them, or merge the aligned chunks and then create the dense cloud? Is either quicker?

If I create dense clouds of each chunk, there will be sections of increasing noise at the end of each chunk. Each join between chunks would then have two sections of increased noise either side of the join, which wouldn't be there if a dense cloud were created from all of the images at once... but is doing the dense cloud with all images practical (time vs. quality)?

Bits of the project are here:  I'm attempting to make a point cloud of the entire reserve including detailed sculptures.  Early tests are looking very promising but it's my first attempt at a significantly larger point cloud.  Any suggestions/advice gratefully accepted.

PS. Loving the 1.2 update.

General / Managing RAM for meshing with CloudCompare
« on: June 28, 2015, 04:06:03 PM »
Hi All

Apart from a lack of speed, my home system only has 8GB of RAM and meshing can be a great challenge, especially since I can still generate relatively large point clouds. I've found CloudCompare to be a very useful tool for both creating and evaluating meshes, and have done a bit of testing to try and get my head around the RAM usage of the Poisson surface reconstruction and to predict which settings will exceed my available RAM.

My source point cloud had 34.9 million points and would freeze my computer when trying to generate a mesh with Photoscan. I exported it as a PLY (no colours), loaded it into CloudCompare and ran a series of Poisson reconstructions with a range of Octree depth and Full depth settings, using Windows Task Manager to track peak RAM usage (and to monitor RAM in case it looked like it was going to blow up). Results below start at Octree depth = 8 and Full depth = 8.

I'm not sure what settings Photoscan uses. I've seen Octree depths of 13 and 14 appear in the console, but nothing for full depth.  Judging by the appearance of meshes and the time it takes to crash on large point clouds I'm guessing it's 9.

I still need to do some testing with different size point clouds to try and get a handle on expected face counts from the size of the point cloud, but this is what I've got so far:

1. Start with slightly lower settings and monitor RAM usage.  If polygon count is too low for what I want, increase octree depth IF peak RAM usage is <33% of total RAM.  Face count increases about 3-3.5X for each Octree depth

2. Once face count is high enough, increase Full depth to get a better mesh IF the mesh isn't exhibiting too much noise AND peak RAM usage is <50% of total RAM. Face count does not increase much but fine details are improved.

In this test case, I got up to 28.5M faces on an awesome-looking mesh. These observations fit with what I've got out of a 32GB system.
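The growth rule in step 1 makes a handy back-of-envelope estimator. A sketch: the ~3-3.5x per-depth factor comes from the tests above, everything else is hypothetical.

```python
# Sketch: extrapolate Poisson face counts from one measured run, using the
# observed ~3-3.5x growth per octree depth level (factor from the tests above).
def estimate_faces(known_faces, known_depth, target_depth, growth=3.25):
    return int(known_faces * growth ** (target_depth - known_depth))
```

The same extrapolation can be applied to peak RAM from a monitored run, as a sanity check before committing to a deeper reconstruction.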

Hi All

I'm doing some tests for scoping a large digitisation project of historical aerial photographs (20,000 6"x6" negatives) and figured I should include steps in the workflow to prepare them for use in Photoscan or similar applications.

1. Digitise at 16-bit greyscale
2. Levels adjustment (the image of the clock is adjusted separately as it's quite dark)
3. Align images to a template (we'll be using PTGui for this; only offset and rotation will be optimised)
4. Output aligned images at fixed pixel dimensions
5. Keep at 16-bit or convert to 8-bit?
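On the last point, the conversion itself is trivial but lossy and one-way; a pure-Python toy of the rescale (a real pipeline would use an image library):

```python
# Toy sketch: linear 16-bit -> 8-bit conversion (a real pipeline would use
# an image library; this just shows that the rescale discards precision).
def to_8bit(pixels_16bit):
    return [v >> 8 for v in pixels_16bit]   # 0..65535 -> 0..255
```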

Anything else I should consider?

Regarding masking in Photoscan, I've done a reconstruction using only the central portion of the image.  Is there any significant value in including the side and bottom sections as well?  I've only got 4 negatives from a single flight path so I don't have much scope for testing. I can probably get PTGui to calculate a correction for falloff of the lens as well.

And while not related to Photoscan, any recommendations on scanning equipment or services that might do this, with specific relevance to photogrammetry? The collection is in Georgia.

Quick test:

Not so much of a feature request (unless it works of course), but more thinking out loud about the alignment/optimisation process.

In short: in situations where the number/position of cameras in a project is not optimal, it may be possible to get better results by separating the optimisation of lens distortion parameters from the calculation of camera position/fov.

The longer version:
As a beta tester for a pano stitching application many years ago, one of the more challenging things to work out was a sequence for optimising camera orientation and lens parameters when virtually no data was provided (i.e. automatic alignment). In non-optimal situations (e.g. very narrow overlap in this case) the optimisation of lens distortion parameters could conflict with the optimisation of fov (typically k2 and fx/fy, to use Photoscan equivalents), so whilst the overlap areas were well aligned, the overall image was noticeably distorted. The automated process was thus designed to first get optimised camera orientations and fov, followed by the inclusion of lens distortion parameters.

Now I know there are some significant differences with photogrammetry, but from what I've seen, there do seem to be some similarities with the calculation of lens parameters.  To give an example I tried doing a reconstruction from 4 historical BW aerial photographs (scanned from negatives... ~80% overlap, single flight path).  Running them through with the default settings, the resulting terrain a) showed a typical "bowl" effect and b) the elevation was exaggerated vertically.

So based on my experience with pano stitching gone wrong I adapted the optimisation process to be a little closer to the sequence I use for panoramas.
1: Align images
2: Transfer the adjusted lens settings to the initial lens settings, but set k1, k2, k3 to 0 and fix the calibration
3: Repeat alignment with new settings
4: Weed out bad points
5: Optimise cameras (fov and offset only)

The result was much closer to what I was expecting.

This would suggest to me that (in the absence of adequate data) the optimisations of lens parameters and camera positions are "overcompensating" for each other, leading to a mathematically optimal result that is physically incorrect. In such cases it may be possible to get a better result by excluding lens distortion parameters in the alignment phase, and also by having the option to disable the optimisation of camera positions when optimising cameras (thus tweaking only the lens settings).
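A toy numeric illustration of that trade-off, using the Brown-style radial model behind Photoscan's k1-k3 parameters (the k value and radii here are made up):

```python
# Brown-style radial distortion: r' = r * (1 + k1*r^2 + k2*r^4 + k3*r^6).
# Demo values are made up; the point is that over a narrow band of radii
# a k1 term is nearly indistinguishable from a focal/fov scale.
def radial(r, k1=0.0, k2=0.0, k3=0.0):
    return r * (1 + k1 * r**2 + k2 * r**4 + k3 * r**6)

band = [0.90 + 0.01 * i for i in range(11)]   # narrow overlap: r in [0.9, 1.0]
scale = sum(radial(r, k1=0.05) / r for r in band) / len(band)

err_band = max(abs(radial(r, k1=0.05) - r * scale) for r in band)
err_full = max(abs(radial(r, k1=0.05) - r * scale)
               for r in [0.1 * i for i in range(11)])
# err_band stays tiny while err_full is several times larger: a fit that
# looks fine in the overlap can still distort the rest of the frame.
```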

Just a thought, but may be worth exploring one day.

General / Working with camera station metadata
« on: March 01, 2015, 08:07:27 AM »
I ran a larger test using a pano rig as a camera station over the weekend. First pass around the building with a 35mm lens (24-70mm zoom), then a second lap closer to the building with a 15mm full-frame fisheye. Ran a few quick projects with some of the images and learnt a few things.
1. GPS occasionally drifted while shooting, so embedded GPS metadata was a bad idea as it threw out the alignment.
2. Grouping the cameras manually is a bit of a pain. I at least had the forethought to shoot an image of my hand in between groups... most of the time.

Given those two, I obviously need to do some preparation of the metadata before I add the images, so I'm wondering what the best way of doing this is.

1. Can I create a structured file to import the images and assign them to groups? I tried exporting the cameras from a project with camera stations configured. Importing that file loaded the images fine, but they weren't in groups. The number of images within a camera station varies, so scripting the grouping probably isn't practical.
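One possible route for this: drive the grouping from a simple sidecar file. The CSV layout here is my own invention, and addCameraGroup and camera.group are from the 1.x scripting API but may differ between versions, so treat this as a sketch.

```python
# Sketch: assign cameras to camera groups from a "filename,group" CSV
# prepared externally. The CSV layout is invented; addCameraGroup and
# camera.group are from the PhotoScan 1.x API and may vary by version.
import csv, io

def read_groups(csv_text):
    # map image label -> group label
    return {row[0]: row[1] for row in csv.reader(io.StringIO(csv_text)) if row}

try:
    import PhotoScan                      # only importable inside PhotoScan
    chunk = PhotoScan.app.document.chunk
    mapping = read_groups(open("groups.csv").read())
    groups = {}
    for camera in chunk.cameras:
        label = mapping.get(camera.label)
        if label is None:
            continue                      # not listed: leave ungrouped
        if label not in groups:
            groups[label] = chunk.addCameraGroup()
            groups[label].label = label
        camera.group = groups[label]
except ImportError:
    pass                                  # running outside PhotoScan
```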

2. What to do with the GPS? Clear the GPS from all but the first camera in each group (or would all cameras need the same coordinates?)... average the coordinates within a group? If I can import the groups via a metadata file, I'd probably handle that externally.

Any suggestions/tips?

General / Camera station questions
« on: February 20, 2015, 03:30:36 AM »
Moving closer to looking at a pole mounted rig. I need to understand a couple of the finer points of camera stations in PS to see how they might fit in with the direction my shooting strategy is taking.

1. Is a small parallax error OK, or does it assume that there is no parallax change between cameras?
2. Are there any specific advantages to processing time and/or accuracy from defining images as a camera station, or just leaving them in as separate cameras?

... and any other info/tips gratefully received...

General / Photosynth -> Photoscan: cave sample
« on: February 09, 2015, 10:54:14 PM »
I was asked about shooting in caves, with a Photosynth link provided as an example:

I had a play with the images in Photoscan and thought I'd share the results, as I found them very interesting. I ran a shadows/highlights adjustment on the images in Photoshop first, boosting the shadows 50% with a 50% range.

Medium alignment and dense cloud

Bug Reports / Can't load fisheye lens calibration
« on: January 31, 2015, 07:13:19 AM »
I can save a lens calibration of a fisheye lens OK, but when I try to load it into the Initial tab (same project or a new one) I just get an error: "Can't load calibration". Sample XML attached.
Photoscan 1.1 (2004), Win7 x64

General / Taking fisheye to the extreme
« on: January 20, 2015, 04:08:58 AM »
Excited by my latest quick test of a combination I should have tried at the start.  Here's a teaser.

Shooting a more complete test this afternoon.

[update] Adding more samples here for those seeing this topic for the first time:

Test 1: repeated with optimised lens parameters

Test 2:
Entire building and surrounds
Full point cloud of this test:

General / Ball's Pyramid
« on: October 12, 2014, 05:05:30 AM »
My first real attempt at a GIS model. 6.9M faces, 4x8192px texture maps

I didn't take the photos. Weather only allowed for one lap of the island, so one side was looking straight into the sun. Tweaked the camera raw images as much as feasible and I was amazed at how well the reconstruction worked on the shaded side. 54 x 24mpx images (every 4th image provided)

General / Fisheye support OMG!!!
« on: September 25, 2014, 02:03:37 AM »
Late last week I started work on developing a strategy for shooting buildings and environments from ground level, based on what I've learnt so far. The first test run was quite fruitful, as I managed to anticipate most of the problems I'd have, and most of my strategies worked. There were, however, a number of difficult challenges for which there didn't seem to be many easy answers, except to take a whole lot of images that weren't necessarily going to contribute to the model.

And along came the pre-release and fisheye support. Nearly all of the difficulties I'd had related to space restrictions making it difficult to create tie points around corners. A fisheye lens is going to be useful for this because I can get both the building and the ground in the frame at close range, with the ground providing tie points around the corner. Ran a quick test on the corner of a building at work and it was very promising. (Canon 1Ds III, 15mm f2.8)

Went back yesterday and shot the whole building, including one wall where there was only 2m of space. As I was shooting it though, I kept thinking of  what I'd seen in the previous test and what else might work. I went a little overboard and 500 photos later I'd captured the building, surrounding gardens and a table setting. Did a quick alignment of 20 images to see what it was going to look like and was excited to see peripheral areas that I wasn't even thinking of showing up. The main chunk of 300 photos is processing now, but my mind has been in overdrive thinking of the possibilities for the various researchers I work with.
I'm going to predict that fisheye support in PS is going to open up a whole new range of possibilities and applications.

Bring on the revolution!  :D

General / How far can a GoPro go (2)
« on: September 24, 2014, 01:12:58 AM »
So we all knew the limitations of using a GoPro in the absence of fisheye support, but now that the pre-release is available it's time to revisit this question. First dense cloud is looking very promising compared to an earlier test:

The car looks amazing compared to the previous version

So given that it's still a 12Mp image scattered across a very wide fov, it's sooooo much better. I'm not going to be rushing out and using my GoPro for photogrammetry though, but rather exploring where fisheye lenses have a practical application, which is obviously in tight spaces to begin with. I can't connect to my VM at the moment, which is frustrating because I had a few chunks processing overnight from a Canon 1Ds with a 15mm lens. Sparse clouds looked very exciting, but that'll be for another thread. ;)

...and the new model:

General / Busy day at 3D printing showcase
« on: September 12, 2014, 11:27:40 PM »
Our uni is currently hosting a 3D printing showcase organised by our IT Research guys (my department runs their printers).  Spent half of my time demoing Photoscan to people wondering how we made the models in our slideshow. So many people with projects waiting for that vital link to get them the data. Minds sufficiently blown ;)
Would be interesting to know if this translates to people buying the software, although I'm already aware of a few people who are using it now.  Hopefully we'll get a good user base that can share ideas and support.
