
Messages - SimonBrown

Pages: 1 [2] 3 4
Agisoft Viewer / The Viewer Comes of Age
« on: November 05, 2021, 04:24:56 PM »
If you have not spent time looking at the latest release of the Metashape Viewer, then I would urge you to do so.

It's gone beyond a simple viewer and now contains a rich set of tools for sharing and analysing 3D content...more details here:

One of our clients will be adopting it for sharing data amongst their user base.

It's a free product and, with the latest release, very much something Agisoft should be proud of.

General / Re: Processing a massive underwater photoset without GPS
« on: October 29, 2021, 11:43:02 AM »
Sequential would not like my mixing of tracks I guess

Mine neither. But that is not the reason I do not use it.

To reiterate:

"will not match adjacent images in parallel runs"

is the reason.

Sequential works well with planned mission drone flights. We divers are not drones. It's not going to fix your issue, but if you want to try it then I believe it's based on the EXIF image creation time, not filename or folder structure.
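To illustrate (this is my own pure-Python sketch, not Metashape's internal code), Sequential-style pairing by EXIF capture time looks something like this:

```python
from datetime import datetime

def sequential_pairs(captures, window=2):
    """Sort images by EXIF capture time and pair each with its nearest
    neighbours in time - roughly what Sequential preselection does.
    captures: {filename: "YYYY:MM:DD HH:MM:SS"} as stored in EXIF."""
    ordered = sorted(
        captures,
        key=lambda f: datetime.strptime(captures[f], "%Y:%m:%d %H:%M:%S"),
    )
    pairs = []
    for i, name in enumerate(ordered):
        # Pair with the next `window` images in capture order only.
        for j in range(i + 1, min(i + 1 + window, len(ordered))):
            pairs.append((name, ordered[j]))
    return pairs

shots = {
    "IMG_003.jpg": "2021:10:28 10:00:04",
    "IMG_001.jpg": "2021:10:28 10:00:00",
    "IMG_002.jpg": "2021:10:28 10:00:02",
}
print(sequential_pairs(shots, window=1))
# [('IMG_001.jpg', 'IMG_002.jpg'), ('IMG_002.jpg', 'IMG_003.jpg')]
```

Note how two tracks that are spatially adjacent but shot minutes apart never fall inside the window - which is exactly why parallel runs fail to match.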

Each camera's images should align to their neighbours, but no alignment will occur between the camera image sets using Sequential. A second alignment pass, with Reset Alignment ticked off, will be needed to align all points.

And to reiterate:

Unless the GPS data is accurate then adding it won't help.

If you really want to go the accurate underwater GPS route then it's possible. A budget minimum of perhaps $70~100k would see the issue fixed, but it will still come with limitations, setup and a working methodology that would add time to the work on site.

Remember, everything can be fixed...but it all comes with cost.

If you have not used video then don't bother - stills will deliver more data for less work.

You may be generating more images - and therefore more data - than is required to produce a result. The entire Thistlegorm - 7 acres - took 24,000 DSLR images. More images is not always best...it's the correct number we really need.

Shooting at 3 to 4m from the subject should provide massive coverage...that's far too distant for my style, which is typically 1~2m to allow the strobe light to restore colour. My gut feeling is there are more images than really needed in the project.
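For anyone wanting to sanity-check their own coverage, the stand-off versus footprint trade-off is simple pinhole arithmetic. A rough sketch - the 24mm lens and full-frame sensor width are illustrative values, not a recommendation:

```python
def footprint_width(distance_m, sensor_width_mm=36.0, focal_length_mm=24.0):
    """Width of scene covered by one frame at a given stand-off,
    using the pinhole model: footprint = distance * sensor / focal."""
    return distance_m * sensor_width_mm / focal_length_mm

# A 24mm lens on a full-frame sensor:
print(footprint_width(1.5))  # 2.25 m wide at strobe-friendly range
print(footprint_width(3.5))  # 5.25 m wide - far more coverage per frame
```

Roughly double the distance covers double the width (four times the area), so far fewer images are needed - but at 3~4m the strobes no longer do their job.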

There is no magic, silver-plated solution here.

There is a mass of data to work through. You could spend more on processing hardware, or change the cameras...but the basic approach to how the images are captured might be the best place to start - and it's zero cost beyond the time needed.

General / Re: Processing a massive underwater photoset without GPS
« on: October 28, 2021, 06:53:10 PM »
Unless the GPS data is accurate then adding it won't help.

Test this by shooting a dataset with an iPhone...then use Reference Preselection set to source and watch what happens.

Last time I tested it, it was a mess: the embedded GPS values were good enough to tell me roughly where in the world the image was shot...and inaccurate enough to confuse Metashape.

We do work with underwater GPS and merge the camera positions into the JPEG data...but it's never used for preselection. We have gone as far as writing a tool to merge GPS data with images - that's how useful it can be - but not for preselection. See for more details.
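The tool itself is not shown here, but the heart of any GPS-to-EXIF merge is converting decimal degrees into the degrees/minutes/seconds values the EXIF GPS tags expect. A minimal sketch - the function names are mine, not the AccuPixel tool's:

```python
def to_dms(decimal_degrees):
    """Convert a decimal coordinate to the (degrees, minutes, seconds)
    triple used by the EXIF GPSLatitude/GPSLongitude tags."""
    value = abs(decimal_degrees)
    degrees = int(value)
    minutes = int((value - degrees) * 60)
    seconds = round((value - degrees - minutes / 60) * 3600, 4)
    return degrees, minutes, seconds

def hemisphere(decimal_degrees, is_latitude=True):
    """EXIF GPSLatitudeRef / GPSLongitudeRef value; sign carries
    the hemisphere, so the DMS triple itself is unsigned."""
    if is_latitude:
        return "N" if decimal_degrees >= 0 else "S"
    return "E" if decimal_degrees >= 0 else "W"

print(to_dms(50.5432), hemisphere(50.5432))     # (50, 32, 35.52) N
print(to_dms(-1.25), hemisphere(-1.25, False))  # (1, 15, 0.0) W
```

A real tool would then write these via an EXIF library and match each fix to an image by capture time - the conversion above is only the coordinate-handling core.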

Sequential might work if you are very disciplined with the sequence of image creation, but it will not match adjacent images in parallel runs. Again, for good reasons we don't use this underwater.

So what to do?

First step is to look at the process of gathering images - are you shooting stills? Or deriving still images from video?

This is a fundamental question. Video is popular underwater, but it carries a whole set of issues that still images simply do not have.

So it's worth understanding that before going further - what method do you use?

« on: October 21, 2021, 10:55:38 AM »
Two options:

The Agisoft Viewer for client-side viewing. For hosted viewing:

General / AccuPixel Mentoring Program - Becky Kagan Schott
« on: October 09, 2021, 11:53:42 AM »
AccuPixel has launched a mentoring program for photogrammetry and Metashape.

Full details in the link:

We would also like to thank Agisoft for their support.

General / Re: Agisoft Viewer DEM ?
« on: September 21, 2021, 01:18:36 PM »
Did you set Raster transform to Palette on export?

The default is none - and this may be causing the issues you describe.

General / Re: When to run "Reduce Overlap", and what to follow it with?
« on: September 08, 2021, 10:39:37 AM »
I've also been running Optimise Cameras afterwards... Is this needed?

Yes...but the primary advantages - accuracy, and the outputs used to measure the changes that recursive optimisation brings - can only be fully realised and exploited in the Professional version.

For the representative shape models the Std version creates, I think it's of limited value - unless anyone knows differently?

I saw your 'scuba human for scale' sub model.  Works great!

It's a singular, recognised unit - "human" - that most can relate to.

The vis in that manta video was the kind of conditions we dream of here in the English Channel...

The surface could be added as a chunk...and then moved to suit...but it feels more like a feature for a viewing/immersive/VR application than for the authoring tool. There is some work going on with a game engine and one of my models that is likely to deliver far more than a raw authoring tool such as Metashape could.

I will share it if/when I can.

Scale - in the Standard version it's not possible to add anything quantitative or precise (Pro version required), but for scale we can add something everyone recognises.

A human.

Check this out:

It's the E49 wreck in Balta Sound. This was the first attempt at adding human scale by including a diver.

There are two divers in the SS Thistlegorm model, on the port side at the stern. Trouble is, the model is so massive they become lost. But they are there, in the ortho photo:

So perhaps a human would help? They need to remain very, very still, and you need to work quickly to capture them.

It won't help with the surface, but the fact they are in scuba gear will add underwater context too?

Glad to know it's been inspiring.

I do a lot of underwater work, but I must confess the idea of a surface hasn't really been something to consider - for me at least - even when working with coral data.

Are you including a scale bar or anything in the scene that can act as an indicator of scale?

This may be something more suitable in the VR/game engine world perhaps?

Unless I'm missing something?

General / Re: "Y-up" trackball rotation (orbit)?
« on: August 28, 2021, 06:07:23 PM »
AccuPixel have released a transform helper, which includes a View mode:

Not sure if it helps?

General / Transform Script - Free For All
« on: August 27, 2021, 10:27:32 AM »
AccuPixel Technical Director Jose has very kindly released a Python script to aid the Transform commands for View, Region and Object - full details and how to download here:

We are very grateful to Geobit Consulting for making this available at zero cost...and to Alexy for the changes required for Metashape Pro 1.7 release.

Any and all feedback most welcome - we hope you find the script useful.

General / Metashape Pro and Mountain Biking
« on: August 20, 2021, 03:06:58 PM »
Could photogrammetry help mountain biking?

When it comes to measuring and creating objective evidence of trail erosion it certainly has potential:

General / Re: Fragmented reconstruction (underwater)
« on: August 18, 2021, 03:11:40 PM »
Whilst the video may be continuous, this does not guarantee success when it comes to aligning. Image blur caused by movement, poor focus, and a lack of detail and tie points will all trigger alignment failures - take a close look at the images and see if there is any pattern to the failures.
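One way to hunt for that pattern programmatically is a variance-of-Laplacian sharpness score: blurred frames lack high-frequency detail and score low. A pure-Python sketch - any cut-off threshold would need tuning per dataset:

```python
def laplacian_variance(image):
    """Sharpness metric: variance of the 4-neighbour Laplacian over
    the interior pixels. `image` is a 2D list of greyscale values;
    blurred frames score low, detailed frames score high."""
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (image[y - 1][x] + image[y + 1][x] +
                   image[y][x - 1] + image[y][x + 1] -
                   4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A high-contrast checkerboard versus a featureless grey frame:
sharp = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
flat = [[128] * 8 for _ in range(8)]
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

Scoring every extracted frame and dropping the bottom tail is a cheap first pass before re-running alignment.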

26 USBL measurements over 10 minutes will require interpolation to estimate the image locations for every frame between measurements - I would hesitate to suggest this will help with alignment or deliver a scaled model.
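The interpolation itself is trivial - the caveat is that it assumes straight-line, constant-speed movement between fixes, which an ROV rarely manages over ~23-second gaps. A sketch with invented timestamps:

```python
from bisect import bisect_right

def interpolate_position(fixes, t):
    """Linearly interpolate an (x, y) position at time t from sparse
    timestamped fixes, e.g. USBL measurements tens of seconds apart.
    `fixes` is a time-sorted list of (time_s, x, y) tuples."""
    times = [f[0] for f in fixes]
    i = bisect_right(times, t)
    if i == 0:                # before the first fix: clamp
        return fixes[0][1:]
    if i == len(fixes):       # after the last fix: clamp
        return fixes[-1][1:]
    (t0, x0, y0), (t1, x1, y1) = fixes[i - 1], fixes[i]
    a = (t - t0) / (t1 - t0)
    return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))

usbl = [(0, 0.0, 0.0), (23, 4.6, 2.3), (46, 9.2, 4.6)]
print(interpolate_position(usbl, 11.5))  # (2.3, 1.15)
```

Every turn, pause or surge between two fixes gets smoothed into a straight line, which is why I would treat the result as a scaling aid rather than an alignment input.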

Extracting images from video means the cameras are treated as N/C - not calibrated - and whilst Metashape can estimate the calibration, the lack of focal length data may be causing alignment issues.

ROV cameras tend to be good at seeing what is in front of the ROV - recording video in low light and guiding the operator - but they may not deliver the high-quality stills that work best for photogrammetry. Can you share the camera details?

We use a similar technique but work with GPS points taken every 2~4 seconds. Using these for scaling delivers very consistent results, but we would not use these values to aid camera alignment:

Not all images will need GPS reference for scaling and location, so the first steps would be to validate the source image quality, rerun alignment and then apply GPS values during recursive optimisation.

Jose and I have recently added two topics dealing with merging photogrammetry and laser scan data into the Professional online training course:

Access to new material for all existing students is included.
