
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - SimonBrown

Pages: [1] 2 3 ... 6
1
Masks are the answer. They can be created in Metashape or external applications such as Photoshop.

We have recently updated our Foundation Course to include working with these, and there is a great worked example that can be downloaded and worked through.

https://accupixel.co.uk/courses/metashape-foundation-course-for-forensics-surveying/

One tip: get the subject to fill as much of the frame as possible. Working from a distance and then masking will work, but you are losing detail that could have gone into the subject.
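To make the masking idea concrete, here is a minimal sketch of what a binary mask does during processing: pixels where the mask is black are simply excluded from consideration. The pixel values are invented for illustration, and real tools work on image files rather than lists.

```python
# Sketch: a binary mask excludes background pixels from consideration.
# In the mask, 0 = ignore, 255 = keep. Values here are illustrative only.
def apply_mask(image, mask):
    """Return the image with masked-out pixels replaced by None (i.e. skipped)."""
    return [
        [px if m == 255 else None for px, m in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

image = [[10, 20], [30, 40]]
mask  = [[255, 0], [0, 255]]    # keep top-left and bottom-right only
print(apply_mask(image, mask))  # -> [[10, None], [None, 40]]
```

The same principle applies whether the mask was painted in Metashape or exported from Photoshop: masked regions contribute no tie points and no mesh geometry.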

2
General / Re: Create a clean hole in my 3D model
« on: September 25, 2024, 07:40:26 PM »
A polygon mesh is not going to resolve to a circular feature like the one you would find in a CAD model.

This sounds like a reverse engineering question? It's a two-step process: the object is scanned and a polygon mesh is created, then surfaces are extracted to create planar or cylindrical CAD model surfaces that accurately describe the geometry, such as a cylindrical hole.

More here:

https://accupixel.co.uk/2023/11/09/reverse-engineering-photogrammetry/

CAD software does not easily resolve polygon meshes...hence the extraction step.


3
The absolute best path to accurate photogrammetry is to make sure the source images are shot with the end game in mind...are technically correct...and carry the necessary overlap. This principle is hammered home in our https://accupixel.co.uk/courses/metashape-foundation-course-for-forensics-surveying/ course.

But in this instance it's not the case. First suggestion would be to review the images and select a set that appear to be shot from the same distance-to-subject. Then see if there is any correlation or grouping of cameras, and if there is a common type you could try putting them in a single sub-folder, or folders if there are a few common groupings.

Forget using any embedded GPS at alignment. Camera phone GPS accuracy causes more havoc than help with alignment, so turn it off. Maybe re-introduce some of it after alignment and recursive optimisation.

I would hesitate to use markers to align. This is a bit advanced (we teach it in the Pro course) and it's best used when there is some common overlap but not enough tie points, such as when working in areas of vegetation.

Clean, uniform source images will always work best. But if not available then we work with what we have...so give it a go.

4
General / Re: Hardware for Large Scale Projects
« on: September 21, 2024, 01:45:07 PM »
I'm working on the assumption that each of the 180,000 images carries optimal overlap with its neighbours?

If yes, then it's a hardware discussion.

If not, then excessive overlap is a known issue that will dramatically increase processing times. One real world example of how a different approach was taken to solve the processing times issue:

https://accupixel.co.uk/2022/05/13/photogrammetry-consultancy/

No one - particularly those who work in a less structured environment where mission planning with preset overlap variables is not possible - will want to leave site with "the one" frame missing. This can be a common issue when working underwater.

It can be better to cull the images after capture and only process what is necessary.
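One crude but common way to cull an over-captured set is to keep every Nth frame. This is a sketch only; the stride value is hypothetical and should come from your actual overlap figures, and a real cull would also drop individually poor frames.

```python
# Sketch: thin an over-captured image sequence by keeping every Nth frame,
# a rough way to cut excessive overlap before processing begins.
def cull(filenames, stride):
    """Keep every `stride`-th image from an ordered capture sequence."""
    return filenames[::stride]

shots = [f"IMG_{i:04d}.jpg" for i in range(10)]
print(cull(shots, 3))  # keeps IMG_0000, IMG_0003, IMG_0006, IMG_0009
```

With 180,000 images, even a stride of 2 halves the matching workload, which grows much faster than linearly with image count.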

To reiterate - I do not know the full details and it may not be applicable here.

5
If I recall, a similar approach was used to model the Treasury building in Petra, Jordan?

First question; what do you mean by "accuracy"?

The scaling/measurements in the scene?

The geo location?

Or the image alignment?

6
General / Re: Optimize camera provides worse dense cloud
« on: September 18, 2024, 10:31:25 AM »
What optimisation steps are you using?

What is the mean reprojection error for the aligned images?

7
Another update to the AccuPixel Metashape Foundation course has been published, covering how to generate masks from the model.

This technique is a very quick way to exclude material from the final mesh, reducing post-processing edits, or to permit the alignment of other cameras.

The topic uses the bull's skull example and is available for download:

https://accupixel.co.uk/courses/metashape-foundation-course-for-forensics-surveying/

8
General / Re: Underwater Photogrammetry with GoPro Hero 12
« on: September 04, 2024, 08:21:52 PM »
Quote
Adding more cameras makes the scanning faster.
It's very, very simple logic.
Instead of swimming 2 parallel tracks, mount 2x cameras at that same pitch. And swim once!
This is unarguable logic. Yes?

On the face of it, it's a plausible argument. If the intention is to create a visual asset...kind of...if the intention is to deliver something scaled and/or geo referenced, no.

Please have a look at the Vaarst https://vaarst.com and Voyis https://voyis.com camera systems. Ask yourself "Why are they building multiple-camera rig systems for 3D, and why are they not hanging their cameras on a 2m pole?"

We work with both camera systems, to a greater or lesser degree, and both are delivering exceptional scaling accuracy. And accuracy - quantified and qualified measurement - is precisely where photogrammetry can and should be adding massive value.

True, this can be achieved with scaling constraints, GPS values and so forth, but then diver task loading goes up and efficiency drops...but using a synced pair of cameras removes all other constraints, and efficiency - the ability to cover more distance in water AND gather quantifiable images - goes up. And it goes up more than you think...

So by all means simply create and cover more...but try asking a question of the model...such as area/how big, distance/how far or volume/how much...and suddenly being faster does not help. None of those can be answered without constraint...and having more unconstrained imagery is not going to change a damn thing.

I really think we are thinking at different ends of the game here. Wanting more area is fine. Wanting quantified answers is fine too. For me, any 3D model lacking constraint cannot go beyond "Look...it's cool..." and people then move on, which is OK if that's all that is needed.

It's the end purpose that separates us. I'm still pulling data from years ago, asking new questions and getting new answers...because it's there and I can. My clients and I need quantified data.

Not everyone needs that, but that's not where my head is.


9
The AccuPixel Foundation course for Metashape Standard has been updated to include working with masks generated in 3rd party apps. Link to the course is here:

https://accupixel.co.uk/courses/metashape-standard-edition/

The new topic includes a complete dataset - images and masks - and walks through the process where the subject moves within a scene, but the source images are masked to remove the background to create a single floating model:

https://construkted.com/asset/ab30e26i80z/

The new course content is available to all existing students, and is ready for download.

10
General / Re: Underwater Photogrammetry with GoPro Hero 12
« on: August 30, 2024, 05:27:59 PM »
Quote
Adding cameras makes it faster.
This was just a tip on making scanning reefs faster with GoPros, like the one he's using.

Yes, but the OP's problem is a bent model whose root cause is (likely) related to image quality, or method, or a combination of both.

Adding in more cameras and using the same methods will not address the root cause, will it?

And faster is absolutely not always better.


11
General / Re: Underwater Photogrammetry with GoPro Hero 12
« on: August 30, 2024, 01:31:30 PM »
Quote
I think my mapped area is about 350m wide, 200m tall...  So 2.5 Thistlegorm by 1.5 Thistlegorms.

That's about 7ha.

Thistlegorm including interior spaces is 2.8053ha.
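The arithmetic behind those two figures, worked through:

```python
# Checking the area claim above: 350 m x 200 m converted to hectares,
# then compared with the Thistlegorm's 2.8053 ha.
area_ha = (350 * 200) / 10_000     # 1 ha = 10,000 m^2
print(area_ha)                     # -> 7.0
print(round(area_ha / 2.8053, 2))  # -> 2.5, i.e. ~2.5 Thistlegorms by area
```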

Quote
So 2x of your Nikon rigs, with their strobes, would be faster at scanning than using just 1... And you'll get the solid data down the middle of the swath...

The design of the lenses is geared around making a single, robust path. Not to extend the coverage.

The single highest value of a two (or more) camera design is the constraint between the two cameras. From this, anything can be measured. The focus (no pun intended) remains on capturing high quality source images that are blur free, in focus and sharp.

This example has several scale bars in the scene. All are 0.25m across the furthest targets and none were used as constraints. The sole source of constraint is the distance between the two cameras:

https://construkted.com/asset/agtfws69uwk/
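A quick way to sanity-check that claim is to measure each scale bar in the finished model and compare against the known 0.25m. This is a sketch with invented model-space coordinates, not values from the linked model:

```python
import math

# Sketch: verify model scale against a known 0.25 m scale bar.
# Target coordinates below are hypothetical model-space values.
bar_known = 0.25                 # metres between the furthest targets
t1 = (1.00, 2.00, 0.50)          # model-space position of target 1
t2 = (1.20, 2.15, 0.50)          # model-space position of target 2

measured = math.dist(t1, t2)     # Euclidean distance in model space
error_mm = (measured - bar_known) * 1000
print(f"measured {measured:.4f} m, residual {error_mm:+.1f} mm")
```

If the residuals on unconstrained bars sit in the low millimetres, the camera-to-camera constraint is doing its job.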

Syncing was not intended to be a bonus, but an absolute requirement before sourcing a second camera. Had syncing been impossible, zero would have been spent on additional kit.

Quote
If i'd scanned it at 1-2m altitude, i'd still be down there today! And probably owing some deco time

At a range of 1~2m the Thistlegorm took, including deco, 13 hours of in-water time. It's a price that has to be paid for sharp, lit images. Again, there are no shortcuts in UW photography.

Quote
But again, i think you're missing the point...  IF you're stuck or not with gopro cameras, and with no lights, it doesn't matter what you're scanning, if you add a 2nd camera, it'll be twice as fast!

Add a camera? Or add a light?

Adding a camera will increase the volume of images and thus area. Adding a light will increase the quality of images.

The OP has a choice here. Both options involve spending money. I would really, really hesitate to recommend adding more cameras - creating low quality source images - as a way to resolve curvature or alignment issues.

Quote
IF you require better quality scans, you can change the altitude, the camera/path pitch, the lighting, the cameras, etc. etc.

If you are wanting to monitor something on the seabed, to record if/how it changes, then quantitative and qualitative data is an absolute requirement?

Otherwise, how can anything be compared/measured?

Bottom line: improving the source images will not be achieved by adding more cameras. More blurred images will not resolve what is - I believe - a fundamental question of image quality.

Whereas changing the technique and methods - including artificial light - will.

I am not suggesting for one minute your work at Manta Point isn't impressive - it is - but chucking more cameras and images at the OP's problem might not be the solution here.

12
General / Re: Underwater Photogrammetry with GoPro Hero 12
« on: August 30, 2024, 09:45:55 AM »
I can understand why sharpness is seen as poor. This:
Quote
Sharpness is poor to start with. Water doesn't have the clarity of air, or as much light available
And this:
Quote
This is why my Manta Point dive site scanning is done at an altitude of just 4-5m!

At this distance a housed DSLR would comfortably produce fuzzy, soft and unsharp underwater images. With a tiny sensor (compared to a DSLR) a GoPro has the odds stacked against it. 

The greatest challenge is not suspended matter - backscatter - but how water robs light. When I first picked up an UW camera I was given a piece of advice:

Get close. Then get closer still.

The single reason we all shoot with ultra wide angle lenses is their ability to focus very close to the front element, maybe just 20cm from the subject. Working close to the subject removes the maximum amount of water.

This example was shot at a distance of 1~2m in the English Channel. Vis was maybe 3~4m max:
https://construkted.com/asset/a4wijcuy6yw/

This one at the same distance, in better vis - 6~8m - but almost pitch black conditions thanks to a layer of plankton:
https://construkted.com/asset/art1ve7lisi/

And this one was shot in much better conditions, but again the 1~2m distance to subject was used:
https://construkted.com/asset/a3q7vacu0cq/

A GoPro will deliver better results when working much closer than 4~5m. It's not a question of kit (Kind of...read on...) but a question of method and technique that will go some way to really improve sharpness.

The second thing that will dramatically improve sharpness is artificial lighting. We can't control the shutter speed on a GoPro...but throw more light at the sensor and the camera will shorten the exposure automatically. Lighting can be fixed and constant, which is better than no light, but strobes are better still as they deliver a short burst of intense light that helps freeze the action and improve sharpness.

Strobes really help improve efficiency as you can work much faster - less risk of motion blur.

Backscatter - the suspended particles - is always present. But photogrammetry software largely ignores it, as the particles move relative to the primary subject. Any clumps of tie points from backscatter can be reduced using recursive optimisation, or manually excluded.

Why does sharpness matter?

Photogrammetry software will cope with blurred, soft or noisy images. They will align and things might look OK...the model always looks OK and without any constraints, who can really say if there are accuracy issues?

But when faced with specific issues - like the OP with a bent model - then the first look at what has gone wrong starts with the source images.

Look at the reprojection errors for each image. Anything >1.0 pixels is not good. Look at the number of tie points. Anything <100 is not good. Together they may (stress may, not seen the data) be causing curvature, as discussed in the paper I linked to.
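Those two thresholds can be turned into a quick triage script. The per-image statistics here are invented for illustration; in practice you would export them from your processing software:

```python
# Sketch: flag images failing the rough thresholds above
# (>1.0 px mean reprojection error, or <100 tie points).
def flag_weak_images(stats, max_error=1.0, min_ties=100):
    """stats maps filename -> (mean reprojection error px, tie point count)."""
    return [name for name, (err, ties) in stats.items()
            if err > max_error or ties < min_ties]

stats = {
    "IMG_0001.jpg": (0.45, 820),   # fine
    "IMG_0002.jpg": (1.60, 310),   # high reprojection error
    "IMG_0003.jpg": (0.70, 60),    # too few tie points
}
print(flag_weak_images(stats))     # -> ['IMG_0002.jpg', 'IMG_0003.jpg']
```

Flagged images are the first place to look when a model bends.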

There are few shortcuts in UW photography. We have supplied technical services to design a rig working at 3~4m that delivers a GSD of <1mm, but each camera sensor is 64MP and there is a huge amount of artificial light available. Even with this, the ROV speed is absolutely critical...too fast and it's all blurred.
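For anyone wanting to estimate GSD for their own rig, the standard relationship is working distance times pixel pitch divided by focal length. The sensor figures below are assumptions for illustration, not the specs of the rig described above:

```python
# Sketch: ground sample distance (mm per pixel) from working distance,
# sensor pixel pitch and lens focal length. Example figures are assumed.
def gsd_mm(distance_m, pixel_pitch_um, focal_mm):
    distance_mm = distance_m * 1000
    pitch_mm = pixel_pitch_um / 1000
    return distance_mm * pitch_mm / focal_mm

# e.g. 3.5 m range, 3.2 micron pixels, 16 mm lens:
print(round(gsd_mm(3.5, 3.2, 16), 3))  # -> 0.7 mm per pixel
```

The formula makes the trade-off explicit: halve the distance and the GSD halves too, which is the numerical version of "get close, then get closer still."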

I would urge anyone working UW to work really close to the subject, and to include artificial light. Quality jumps and with it the chances of greater, repeatable accuracy.

I would also hesitate to suggest that adding more cameras to a method that is creating images lacking inherent quality is the correct way to fix what appears to be a fundamental image quality issue.

Unconvinced? Test it...work close and compare the results. Check the EXIF data for shutter speed and see if handholding a camera is going to induce motion blur. Borrow a light and repeat the test.

ugurdemirezen - if you can share the source images, drop me a message here https://accupixel.co.uk/contact-us/ and I will be happy to take a look and see what the root cause may be.

13
General / Re: Banana Effect and "Not Enough Reference Data" for GCPs
« on: August 29, 2024, 10:11:54 AM »
Single long strips of images can struggle to align robustly, and this method should be avoided.

The causes of curvature are discussed here:

https://www.mdpi.com/2077-1312/12/2/280

It's related to UW work but applies to terrestrial images too.

Finally, don't go down the rabbit hole of thinking camera calibration will help:

https://accupixel.co.uk/2023/09/06/course-update-manual-camera-calibration/

Recalibrated cameras can be an advantage in certain scenarios, but not on a texture-rich coral reef.

14
General / Re: how to fix iphone wide angle lens distortions
« on: August 29, 2024, 10:06:59 AM »
I regularly work with 16mm fisheye lenses. Photogrammetry can cope with this distortion as long as it's fed a predictable set of information.

Do not correct or crop images. Use them straight and with supplied EXIF.


15
General / Re: Underwater Photogrammetry with GoPro Hero 12
« on: August 29, 2024, 10:03:22 AM »
Quote
How would a stereo pair of GoPros work?  Bearing in mind that there's no way to synchronise both cameras...

This is one reason, among others, why AccuPixel would not recommend using GoPro action cameras.

Quote
And wrt coverage, I've had fantastic results by adding more cameras, to increase coverage!
Adding GCPs into UW datasets is typically expensive. Using diver-derived measurements is recognised as a great way to induce errors.

Achieving accurate underwater constraints can be done, but the costs and time can be prohibitive...whereas...

Synchronised cameras produce inherent scaling and this delivers a scaled model....and scaled models are far more valuable than data that merely represents the shape. Plus the efficiency of gathering reliable, scalable data jumps up - no need for the diver to measure anything, or even deploy a scale bar.

So whilst it may seem that using more cameras improves efficiency, in reality it's just more photos. Data created without additional constraints or geo location is of very limited value, apart from being able to look at a model, spin it round and get an overall orientation of a site.

This paper explains more on why geo location and constraints add value: https://www.mdpi.com/2077-1312/12/2/280

There is a second paper coming out that discusses how unconstrained 3D data can be used to add value, but only when it's supported by a really robust and accurate underlying set of scaled data.
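The inherent scaling from a synced pair can be sketched in a few lines: the reconstruction comes out at an arbitrary scale, and the known rig baseline pins it to metres. Camera centres and the measured points below are hypothetical model-space values:

```python
import math

# Sketch: a synced stereo pair gives scale from the known rig baseline.
def scale_factor(cam_left, cam_right, baseline_m):
    """Metres per model unit, from the known physical camera separation."""
    return baseline_m / math.dist(cam_left, cam_right)

cam_l = (0.0, 0.0, 0.0)                 # camera centres in model units
cam_r = (2.0, 0.0, 0.0)                 # (arbitrary reconstruction scale)
s = scale_factor(cam_l, cam_r, 0.5)     # rig baseline measured as 0.5 m

model_len = math.dist((1.0, 1.0, 0.0), (5.0, 1.0, 0.0))
print(model_len * s)                    # -> 1.0 m real-world length
```

Because every synced frame pair carries this constraint, scale is enforced throughout the model rather than only at a few scale bars.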

Quote
And underwater, sharpness is poor to start with!

Errr...no, it's not.
Sharpness is and should be inherent in every UW image if the correct method and equipment are used. If you are seeing blurred, out of focus or less than sharp images, then the odds of producing a robust model are compromised, I'm afraid.

