Messages - SimonBrown

1
General / Re: Photogrammetry with thermal images / water
« on: Today at 12:11:24 PM »
The banks of the river - all things being equal - will be point-rich and align.

Are you using a drone? If the width of the river is not too great then a flight at altitude where both banks are visible might align to make a single model.

There is a risk with this as tie points may become clustered...not good.

The river itself? This will be interpolated: the mesh grows its boundaries until it either a) meets another piece of mesh, b) reaches the bounding box, or c) the algorithm calls a halt.

So the data in the river would be made up...but it might be possible to extract data from the aligned images. We are doing this with some data embedded in the JPEG (actually burned into the image, as pixels) and pulling it into a machine-readable format.
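For what it's worth, the rough shape of that is something like the following - a minimal sketch only, assuming the values are burned into a fixed pixel region of each image. The crop box, folder name and OCR route are illustrative, not what we actually run:

import csv
import glob
from PIL import Image
import pytesseract

# Sketch: pull a burned-in overlay (e.g. a temperature readout) out of thermal
# JPEGs into machine-readable form. Assumes the value sits in a fixed region.
# Requires: pip install pillow pytesseract (plus the Tesseract OCR engine).
OVERLAY_BOX = (20, 20, 260, 60)   # left, top, right, bottom - hypothetical region

with open("overlay_values.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["image", "text"])
    for path in sorted(glob.glob("thermal_jpegs/*.jpg")):
        crop = Image.open(path).crop(OVERLAY_BOX)
        text = pytesseract.image_to_string(crop).strip()
        writer.writerow([path, text])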

But without seeing the source data and understanding what the outputs are, the above may be relevant...or not. Can you share anything?

2
General / Re: How to straighten a 'banana' scan in MS Std?
« on: December 01, 2021, 01:11:46 PM »
Curvature on long & thin objects is a common theme. Most are unaware their creation is...bent.

Fixing it might be possible...it's not something I have encountered in my own work, but here goes:

Constraints - requires Pro.
Calibration - take a robust section of another chunk that does not show curvature and export the camera calibration. Then load it into this project. It *might* work. It might not. With calibration, having constraints always helps. The problem with the Standard version is we won't really know how curved, or not, the section is - there is no way of validating what has been created.
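If you do have access to Pro, the same idea can be scripted - copying the calibration in memory rather than via an XML export/import. A rough sketch only; the calls are from memory of the Python API (Pro only), so treat the names as assumptions and check them against the API reference for your version:

import Metashape

# Sketch (Metashape Pro, Python API): take the adjusted calibration from a
# straight, well-constrained chunk, apply it to the curved chunk and fix it.
doc = Metashape.app.document
good_chunk = doc.chunks[0]    # chunk that aligned without curvature
curved_chunk = doc.chunks[1]  # the 'banana' chunk

calib = good_chunk.sensors[0].calibration          # adjusted calibration
for sensor in curved_chunk.sensors:
    sensor.user_calib = calib                      # use it as the initial calibration
    sensor.fixed_calibration = True                # prevent re-adjustment

curved_chunk.optimizeCameras()                     # re-run the bundle adjustment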

Curvature is best avoided by doing long, thin runs.

3
General / Metashape Collision Investigation Webinar
« on: November 24, 2021, 08:19:02 PM »
Next month I will be delivering a webinar on using cost-effective kit for #photogrammetry, in conjunction with the Institute of Traffic Accident Investigators.
An overview of Agisoft's Viewer and its benefits will be included too.

It's a free event, open to all. More details, including a link for registration, here:
https://www.itai.org/event-4578739

4
General / Re: Seabed mapping -> alignment of 100.000 pics
« on: November 19, 2021, 06:34:05 PM »
By "patches" do you mean separate chunks?

5
General / Re: The Viewer Comes of Age
« on: November 08, 2021, 06:16:51 PM »
Quote
What's the best format to export out of Metashape Standard, for viewing a vertex coloured shaded model, or the textured model in Viewer?

The real step-change for the Viewer is the ability to measure, interrogate and analyse the model, and for this a scaled and (optionally) geolocated model is required.

Apart from "It looks like this" - which in itself can be of some value - the standalone viewer can't really do much more than online viewing tools when it comes to handling Metashape Standard output I'm afraid.

Optimising images, scaling, georeferencing and the accompanying outputs are what the Pro version delivers and what the Viewer is now designed to handle and work with.

As for what format...OBJ, FBX...whatever really. Some are physically smaller than others, but I have not tested every variation.


6
General / Re: Processing a massive underwater photoset without GPS
« on: November 06, 2021, 02:21:09 PM »
Quote
The troublesome scan i was testing with Sequential, it failed to align 2/3 of the cameras, rather than a the 10% before.  I'm guessing because the sequence was 'broken' by a run of a few bad quality cameras..

Sequential really is aimed at automated, machine-executed missions. Breaking the sequence will break consecutive alignment.

Quote
When is using Estimated for a 2nd run of Align Photos worth doing?

Every time you use Sequential. Make sure Keep Tie Points is checked in Preferences - Estimated will then cross-align the images.
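Scripted, the two-pass approach looks something like this. A sketch only, using Pro's Python API; the parameter names are from memory of the 1.7/1.8 releases and may differ slightly in your version:

import Metashape

# Sketch (Metashape Pro, Python API): first pass Sequential, second pass Estimated.
# Keep Tie Points must be enabled so the first pass's keypoints are reused.
chunk = Metashape.app.document.chunk

# Pass 1: consecutive images matched against each other
chunk.matchPhotos(reference_preselection=True,
                  reference_preselection_mode=Metashape.ReferencePreselectionSequential,
                  keep_keypoints=True)
chunk.alignCameras()

# Pass 2: cross-match using the positions estimated in pass 1,
# without discarding the existing alignment
chunk.matchPhotos(reference_preselection=True,
                  reference_preselection_mode=Metashape.ReferencePreselectionEstimated,
                  keep_keypoints=True,
                  reset_matches=False)
chunk.alignCameras(reset_alignment=False)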

Quote
And thinking back to a scan from yesterday, it did have the last section of the scan shooting off on a completely wrong plane...  So i selected all of those cameras and realigned them, and they then aligned great...

Check the number of tie points in the Reference pane for these cameras. If it's less than 100, consider their alignment weak and expect to see them removed during recursive optimisation - which is required if accuracy of output is a goal.
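In Pro you can pull the same numbers out with a few lines of Python. A sketch; note that in Metashape 2.x the tie point cloud is chunk.tie_points rather than chunk.point_cloud:

import Metashape

# Sketch (Metashape Pro, Python API): flag cameras with weak tie point support.
chunk = Metashape.app.document.chunk
projections = chunk.point_cloud.projections   # chunk.tie_points.projections in 2.x

for camera in chunk.cameras:
    if not camera.transform:
        print(camera.label, "not aligned")
        continue
    count = len(projections[camera])
    if count < 100:
        print(camera.label, "weak alignment:", count, "projections")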

Quote
I'm having to live on less than £10 a day at the moment, as I'm proper skint now.  So i'm stuck with the cameras and hardware i already had

Don't change your kit. Change your capture method. Do you really need all those images? Or would fewer do? Process and align what is needed - this will save time.

7
General / The Viewer Comes of Age
« on: November 05, 2021, 04:24:56 PM »
If you have not spent time looking at the latest release of the Metashape Viewer, then I would urge you to do so.

It's gone beyond a simple viewer and contains a rich set of tools for sharing and analysing 3D content...more details here:

https://accupixel.co.uk/2021/11/04/value-the-view/

One of our clients will be adopting it for sharing data amongst their user base.

It's a free product and, with the latest release, very much something Agisoft should be proud of.

8
General / Re: Processing a massive underwater photoset without GPS
« on: October 29, 2021, 11:43:02 AM »
Quote
Sequential would not like my mixing of tracks I guess

Mine neither. But that is not the reason I do not use it.

To reiterate:
Quote
...it will not match adjacent images in parallel runs
is the reason.

Sequential works well with planned-mission drone flights. We divers are not drones. It's not going to fix your issue, but if you want to try it then I believe it's based on the EXIF image creation time, not filename or folder structure.

Each camera's images should align to their neighbours, but no alignment will occur between the camera image sets using Sequential. A second alignment will be needed, with Reset Alignment left unticked, to align all points.
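If you do want to try Sequential, it's worth checking first that the capture times actually run in the order you think they do. Something like this - a sketch only, with a made-up folder name, and piexif assumed installed:

import glob
import piexif

# Sketch: list images in EXIF capture-time order and flag any that sort
# differently from their filenames - Sequential preselection cares about time,
# not filename or folder structure. Requires: pip install piexif
def capture_time(path):
    exif = piexif.load(path)
    # DateTimeOriginal is stored as bytes like b"2021:11:06 14:21:09"
    return exif["Exif"].get(piexif.ExifIFD.DateTimeOriginal, b"").decode()

paths = sorted(glob.glob("dive_images/*.jpg"))
by_time = sorted(paths, key=capture_time)

for name_order, time_order in zip(paths, by_time):
    if name_order != time_order:
        print("Order differs:", name_order, "vs", time_order)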

And to reiterate:

Quote
Unless the GPS data is accurate then adding it won't help.

If you really want to go the accurate underwater GPS route then it's possible. A budget minimum of perhaps $70~100k would see the issue fixed, but it will still come with limitations, setup and a working methodology that would add time to the work on site.

Remember, everything can be fixed...but it all comes with cost.

If you have not used video then don't bother - stills will deliver more data for less work.

You may be generating more images, and therefore more data, than is required to produce a result. The entire Thistlegorm - 7 acres - took 24,000 DSLR images (see https://deep3d.co.uk/2020/05/03/the-ss-thistlegorm-content/). More images is not always better...it's the correct number we really need.

Shooting at 3 to 4m from the subject should provide massive coverage...that's far too distant for my style, which is typically 1~2m to allow the strobe light to restore colour...my gut feeling is there are more images than really needed in the project.

There is no magic, silver-plated solution here.

There is a mass of data to work through. You could spend more on processing hardware or change the cameras...but the basic approach of how the images are captured might be the best place to start - and it's zero cost beyond the time needed.

9
General / Re: Processing a massive underwater photoset without GPS
« on: October 28, 2021, 06:53:10 PM »
Unless the GPS data is accurate then adding it won't help.

Test this by shooting a dataset with an iPhone...then use Reference Preselection set to Source and watch what happens.

Last time I tested it, the result was a mess: the embedded GPS values were good enough to tell me roughly where in the world the image was shot...and inaccurate enough to confuse Metashape.

We do work with underwater GPS and merge the camera positions into the JPEG data...but it's never used for preselection. We have gone as far as writing a tool to merge GPS data with images - that's how useful it can be - but not for preselection. See https://accupixel.co.uk/2021/07/26/new-release-gps-position-processing/ for more details.
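The tool itself isn't something I can paste here, but the principle is simple enough. A minimal sketch of writing a known position into a JPEG's EXIF with piexif - the path and coordinates are hypothetical, and our own tool does rather more than this:

import piexif

# Sketch: write a latitude/longitude into a JPEG's EXIF GPS tags.
# Requires: pip install piexif
def to_dms_rationals(value):
    """Convert decimal degrees to EXIF degree/minute/second rationals."""
    value = abs(value)
    deg = int(value)
    minutes = int((value - deg) * 60)
    seconds = round((value - deg - minutes / 60) * 3600 * 100)
    return ((deg, 1), (minutes, 1), (seconds, 100))

def write_gps(path, lat, lon):
    exif = piexif.load(path)
    exif["GPS"] = {
        piexif.GPSIFD.GPSLatitudeRef: b"N" if lat >= 0 else b"S",
        piexif.GPSIFD.GPSLatitude: to_dms_rationals(lat),
        piexif.GPSIFD.GPSLongitudeRef: b"E" if lon >= 0 else b"W",
        piexif.GPSIFD.GPSLongitude: to_dms_rationals(lon),
    }
    piexif.insert(piexif.dump(exif), path)   # rewrites the file in place

write_gps("dive_images/IMG_0001.jpg", 27.8139, 33.9201)  # hypothetical example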

Sequential might work if you are very disciplined with the sequence of image creation, but it will not match adjacent images in parallel runs. Again, for good reasons we don't use this underwater.

So what to do?

First step is to look at the process of gathering images - are you shooting stills? Or deriving still images from video?

This is a fundamental question. Video is popular underwater, but it carries a whole set of issues that still images simply do not generate.

So it's worth understanding that before going further - what method do you use?

10
General / Re: WHAT ONLINE VIEWER FOR CLIENT TO PLAY METASHAPE RESULT ?
« on: October 21, 2021, 10:55:38 AM »
Two options:

The Agisoft Viewer for client side viewing.

https://construkted.com for hosted viewing.


11
General / AccuPixel Mentoring Program - Becky Kagan Schott
« on: October 09, 2021, 11:53:42 AM »
AccuPixel is launching a mentoring program for photogrammetry and Metashape.

Full details in the link:

https://accupixel.co.uk/2021/10/09/accupixel-mentorship-becky-kagan-schott/

We would also like to thank Agisoft for their support.

12
General / Re: Agisoft Viewer DEM ?
« on: September 21, 2021, 01:18:36 PM »
Did you set Raster transform to Palette on export?

The default is none - and this may be causing the issues you describe.
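If you are scripting the export (Pro), the equivalent is the raster_transform parameter. A sketch from memory, so check the exact parameter and enum names against the API reference for your version:

import Metashape

# Sketch (Metashape Pro, Python API): export the DEM with the palette
# transform applied, rather than raw elevation values.
chunk = Metashape.app.document.chunk
chunk.exportRaster(path="dem_palette.tif",
                   source_data=Metashape.ElevationData,
                   raster_transform=Metashape.RasterTransformPalette)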

13
General / Re: When to run "Reduce Overlap", and what to follow it with?
« on: September 08, 2021, 10:39:37 AM »
Quote
I've also been running Optimise Cameras afterwards... Is this needed?

Yes...but the primary advantages - accuracy, and the outputs used to measure the changes recursive optimisation brings - can only be fully realised and exploited in the Professional version.

For the representative shape models the Standard version creates, I think it's of limited value - unless anyone knows differently?
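For anyone on Pro, the recursive optimisation I'm referring to is essentially gradual selection plus Optimize Cameras, repeated. A sketch using the 1.x class names (in 2.x the filter lives under Metashape.TiePoints, and the threshold below is only illustrative):

import Metashape

# Sketch (Metashape Pro, Python API): one round of recursive optimisation -
# select the worst tie points by reprojection error, remove them, re-optimise.
chunk = Metashape.app.document.chunk

f = Metashape.PointCloud.Filter()
f.init(chunk, criterion=Metashape.PointCloud.Filter.ReprojectionError)
f.selectPoints(0.5)                      # threshold is project-dependent
chunk.point_cloud.removeSelectedPoints()

chunk.optimizeCameras(adaptive_fitting=False)
# Repeat with tighter thresholds (and other criteria such as Reconstruction
# Uncertainty) until the error metrics stop improving.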

14
Quote
I saw your 'scuba human for scale' sub model. Works great!

It's a singular, recognised unit - "human" - that most can relate to.

The vis in that manta video was the kind of conditions we dream of here in the English Channel...

The surface could be added as a chunk...and then moved to suit...but it feels more like a viewing/immersive/VR application than one for the authoring tool - there is some work going on with a game engine and one of my models that is likely to deliver far more than a raw authoring tool such as Metashape could.

I will share it if/when I can.

15
Scale - in the Standard version it's not possible to add anything quantitative or precise (Pro version required), but for scale we can add something everyone recognises.

A human.

Check this out:

https://sketchfab.com/3d-models/underwater-wreck-of-hm-submarine-e-49-40d1c47a4d7447feb42e643cea895d7a

It's the E49 wreck in Balta Sound. This was the first attempt at adding human scale by including a diver.

There are two divers in the SS Thistlegorm model, on the port side at the stern. Trouble is, the model is so massive they become lost. But they are there, in the ortho photo:

https://dronelab.io/map/public/viewer/e5a43bc124a34ea2b4bcfd1d2843463e

So perhaps a human would help? They need to remain very, very still - and you need to work quickly to capture them.

It won't help with the surface, but the fact they are in scuba gear will add underwater context too?

Glad to know it's been inspiring.
