
Author Topic: Built in benchmarking  (Read 1568 times)

CheeseAndJamSandwich

  • Full Member
  • ***
  • Posts: 189
    • View Profile
    • Sketchfab Models
Built in benchmarking
« on: December 18, 2024, 01:59:19 AM »
As benchmarking is topical right now with Andyroo's findings, could we get some built-in benchmarking tools?

Something that gives us some rough scores for the hardware/OS/settings we're using.

Perhaps some synthetic tests that give scores for each stage of each workflow task, with separate scores for CPU, GPU, storage, etc.
These could be quick tests that take just a few minutes to run, perhaps downloading a 'small' set of sample data to run against.

And perhaps a set of 'real-world' datasets that you might have, or we could donate data to.
Then we could test a drone scan, lidar scan, low-res scans, very high-res scans, underwater scan (with no georef data), a turntable scan, a body scan, etc. etc. etc.
These all tax Metashape and the hardware in different ways, to different amounts, so we'd be able to run a benchmark that best fits our own work.

This should allow us to see what hardware configs to go with and to also keep track of how Metashape versions differ from each other.
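To make scores comparable across machines, one common convention is to normalise each stage's time against a reference machine. Metashape publishes no such scheme today, so the stage names and reference times below are purely hypothetical; this is just a sketch of how per-stage and overall scores could work:

```python
# Hypothetical scoring sketch -- Metashape defines no official scheme.
# Each stage's score is the reference time divided by the measured time,
# scaled to 100, so the imaginary baseline machine scores 100 everywhere.

REFERENCE_TIMES = {            # seconds on an imaginary baseline rig
    "match_photos": 120.0,
    "depth_maps": 300.0,
    "build_model": 180.0,
}

def stage_score(stage: str, measured_seconds: float) -> float:
    """Score above 100 means faster than the baseline, below 100 slower."""
    return round(100.0 * REFERENCE_TIMES[stage] / measured_seconds, 1)

def overall_score(measured: dict[str, float]) -> float:
    """Geometric mean of stage scores, so one outlier stage can't dominate."""
    scores = [stage_score(stage, secs) for stage, secs in measured.items()]
    product = 1.0
    for s in scores:
        product *= s
    return round(product ** (1.0 / len(scores)), 1)
```

A machine that matches photos in 60 s instead of the baseline's 120 s would score 200 on that stage; the geometric mean then rolls the stages into a single headline number.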
My 'little' scan of our dive site, 'Manta Point'.  Mantas & divers photoshopped in for scale!
https://postimg.cc/K1sXypzs
Sketchfab Models:
https://sketchfab.com/cheeseandjamsandwich/models

CheeseAndJamSandwich
Re: Built in benchmarking
« Reply #1 on: January 09, 2025, 04:07:27 PM »
BYOD:  Bring your own Dataset...

Along with the built-in synthetic benchmarks, testing and scoring every subtask of every workflow task...
And of course the selection of real-world datasets, including user-donated ones, covering the broad range of typical photogrammetry uses, in sizes large and small...

Could we also have a feature that lets us feed the MS benchmark tool our own dataset, which it then processes while documenting the times and processing rates for every subtask of every workflow task?

Maybe we could also just get more detailed reporting each time MS completes any job we give it, with far more broken-down details, times, etc.?

You often ask us for the times of the finer subtasks when we say things are faster, or slower.  Could this reporting data be made a lot easier to get?
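A rough version of this is already possible from a processing script. The timer below is plain Python; nothing in it is a Metashape API call, but in a Metashape script you could wrap each workflow call (e.g. `with timed("match_photos"): chunk.matchPhotos()`) to collect the per-subtask times the developers keep asking for:

```python
import time
from contextlib import contextmanager

# Generic stage timer: records wall-clock seconds per named subtask.
# In a Metashape processing script, wrap each workflow call, e.g.
#   with timed("depth_maps"): chunk.buildDepthMaps()
# (the Metashape calls themselves are deliberately not reproduced here).

timings: dict[str, float] = {}

@contextmanager
def timed(stage: str):
    """Context manager that stores the elapsed time under `stage`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[stage] = time.perf_counter() - start

def report() -> str:
    """One line per stage, slowest first."""
    lines = [f"{name}: {secs:.2f} s"
             for name, secs in sorted(timings.items(),
                                      key=lambda kv: -kv[1])]
    return "\n".join(lines)
```

The resulting `report()` text is exactly the kind of breakdown that could be pasted into a forum post when comparing hardware or Metashape versions.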

CheeseAndJamSandwich
Re: Built in benchmarking
« Reply #2 on: February 11, 2025, 03:40:44 PM »
Now that we have new GPUs on offer (although unobtainable, and unaffordable)...

Could we at least have some way of getting detailed breakdowns of processing performance, with timings for each sub-task, so we can report meaningful, useful results from any tests the lucky folk with new cards might run for us?
And again, with Bzuco noticing that running more than the default 2 GPU threads greatly improves Depth Maps generation, having detailed results and tools would be very helpful!
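Until such tools exist, timings can be scraped from the console log. The log format assumed below is an invention for illustration only; real Metashape logs may phrase their "finished" lines differently, so the regex would need adjusting to whatever your console pane or log file actually prints:

```python
import re

# Assumed log format for illustration -- real Metashape logs may differ;
# adapt SAMPLE_LOG and the pattern to your actual console/log output.
SAMPLE_LOG = """\
2025-02-11 14:02:10 Matching photos...
2025-02-11 14:04:55 finished matching photos in 165.2 sec
2025-02-11 14:05:01 Generating depth maps...
2025-02-11 14:31:44 finished depth maps generation in 1603.7 sec
"""

FINISHED = re.compile(r"finished (?P<task>.+?) in (?P<secs>[\d.]+) sec")

def extract_timings(log_text: str) -> dict[str, float]:
    """Map each 'finished ... in N sec' line to its duration in seconds."""
    return {m["task"]: float(m["secs"])
            for m in FINISHED.finditer(log_text)}
```

Running this over a full log would make it easy to compare, say, depth maps generation time at 2 GPU threads versus more, without manually stopwatching each run.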