Messages - andyroo

31
Bug Reports / Re: Missing GEOID12B_AK GEOID file for Alaska/United States
« on: December 12, 2022, 10:39:29 AM »
Hi Alexey, the zones are EPSG 3338 and EPSG 6330 through EPSG 6339.

32
Bug Reports / Re: Missing GEOID12B_AK GEOID file for Alaska/United States
« on: December 09, 2022, 03:52:51 AM »
We tried to install from the above location and got this error:

libtiff error: Not a TIFF or MDI file, bad magic number 2570 (0xa0a)
TIFFClientOpen: unexpected error: <snip>/apps/metashape/1.8.4-pro/geoids/us_noaa_g2012ba0.tif

33
I just set up a batch of four DEM exports of large areas with 16384x16384 tiles so I can build COGs (which I also wish I could export directly from Metashape), and it's painful to see only one node working even though there are hundreds of tiles to build. It seems like this could be done similarly to building orthos/DEMs, with many subtasks that could be distributed across nodes.
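
For anyone scripting this, the per-chunk export looks roughly like the following (untested sketch; I'm assuming exportRaster's split_in_blocks/block_width/block_height parameters behave as described in the 1.8 Python API reference, and the destination path is just a placeholder):

import os
import Metashape

doc = Metashape.app.document
for chunk in doc.chunks:
    out_dir = os.path.join("/path/to/dest_dir", chunk.label)  # placeholder destination
    os.makedirs(out_dir, exist_ok=True)
    # export the DEM as 16384x16384 tiles so they can be assembled into a COG later
    chunk.exportRaster(os.path.join(out_dir, chunk.label + ".tif"),
                       source_data=Metashape.ElevationData,
                       split_in_blocks=True,
                       block_width=16384, block_height=16384)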

Still love the software though, and the dev responsiveness. Excited for the upcoming 2.0!

34
Feature Requests / Cloud-optimized geotiff export
« on: December 05, 2022, 08:21:02 AM »
I would love to be able to export DEMs and orthos in cloud-optimized GeoTIFF (COG) format, specifying tile size, compression, predictor, etc. We are using COGs to provide cloud-ready ortho and DEM data that can be queried or viewed without downloading an entire dataset (and that tile servers such as TiTiler can ingest and serve, too).

As-is, it takes me MUCH longer to convert the exported TIF than it does to create it. I would be very grateful if I could natively export COGs...

Currently I export geotiffs and convert using GDAL. For large geotiffs I batch-export in tiles (16384 x 16384) into <dest_dir>/{chunklabel}/{chunklabel}.tif and convert like this:

for DIR in *; do gdalwarp "$DIR"/**.tif ./cog/"$DIR"_<projection>_cog.tif -of cog -overwrite -multi -wm 80% -co BLOCKSIZE=256 -co BIGTIFF=YES -co COMPRESS=DEFLATE -co PREDICTOR=YES -co NUM_THREADS=ALL_CPUS; done
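
(If you'd rather do the conversion from Python, something roughly equivalent using the GDAL Python bindings is sketched below - untested, with the same creation options as above and placeholder file names; I've dropped the memory/overwrite flags for brevity:)

from osgeo import gdal

gdal.UseExceptions()
# convert one exported tile set to a COG with the same creation options as the gdalwarp call above
gdal.Warp("cog/output_cog.tif", "input.tif",  # placeholder paths
          format="COG", multithread=True,
          creationOptions=["BLOCKSIZE=256", "BIGTIFF=YES", "COMPRESS=DEFLATE",
                           "PREDICTOR=YES", "NUM_THREADS=ALL_CPUS"])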

Note this is basically a re-ask of my post from 2 years ago.

Andy

35
Bug Reports / Missing GEOID12B_AK GEOID file for Alaska/United States
« on: December 02, 2022, 04:24:17 AM »
I tried to create a compound coordinate system to export data in NAVD88 vertical datum and got the error

"Selected vertical datum is unavailable. Please configure datum transformation."

And that's when I realized that Metashape's geoids page is missing the GEOID12B_AK file from NGS.

Now that I've looked into it a little more and see that you're using the GTG format, can I just install it myself from the PROJ-data repo here?

36
TL;DR: working volume is a VERY IMPORTANT variable to control. Minimizing the working volume/bounding boxes reduced processing time by ~100x.

I copied the project while it was still running and was able to save the depth maps from that step. Then I copied the chunk into three separate chunks and deleted photos from each where there were natural breaks in the sparse cloud, to make three sub-areas. I minimized the bounding box/working volume to barely cover the sparse cloud in x,y,z and restarted dense cloud generation (reuse depth maps enabled).
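
For anyone wanting to script the bounding box step, this is roughly what I mean (untested sketch against the 1.8 Python API; it assumes tie points expose coord/valid and that region center/size/rot work as documented, and it shrinks the region to an axis-aligned box around the sparse cloud in internal coordinates):

import Metashape

chunk = Metashape.app.document.chunk
# gather sparse cloud (tie point) coordinates in the chunk's internal coordinate system
pts = [p.coord for p in chunk.point_cloud.points if p.valid]
mins = [min(p[i] for p in pts) for i in range(3)]
maxs = [max(p[i] for p in pts) for i in range(3)]

region = chunk.region
region.rot = Metashape.Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # axis-aligned box
region.center = Metashape.Vector([(mins[i] + maxs[i]) / 2 for i in range(3)])
region.size = Metashape.Vector([maxs[i] - mins[i] for i in range(3)])
chunk.region = region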

37
I'm working on a shoreline reconstruction over about 1,000 km of curvy shoreline (the bounding rectangle covers an area of about 100,000 sq km), and during dense cloud processing it's chopped into 21,768 nodes/slices.

I noticed during dense cloud initialization/pre-processing (when one of my nodes terminated) that the working volume of the project was very large (I believe in the quintillions of units), but when pre-processing was complete everything seemed to be going fine... until the project was nearly done and I noticed that several nodes were/are still working on the first few numbered slices/nodes, which also have a HUGE working volume (around 1-2 x 10^16 to 10^17). These nodes are still making progress, about 4-6% every 24 hours, and their relative progress seems to be tied to their size.

I realize now that I should have at least gone into the bounding box and really reduced the height, but I'm wondering how dense cloud processing scales with volume: if I broke the project up where there are breaks in the shoreline (rivers/inlets) or strong curvature in the shoreline, would it process much faster as several small projects because their total volume was lower, even though the total number of images (~40,000) is the same?

38
Bug Reports / Re: Vertical Datum out of range 1.8.4
« on: October 19, 2022, 10:09:26 PM »
Working on 1.8.4 with both the old and new geoids installed:

egm96-15.tif  navd88-12b.tif  us_noaa_g2012b.tif

I'm getting the "vertical datum out of range" error in 1.8.4 when trying to build a DEM in NAD83UTM17N (EPSG:6346)+NAVD88 from a point cloud in NAD83(2011) (EPSG:6318) ellipsoid height. Working in SW Florida:

2022-10-19 13:39:12 BuildDem.buildRawTiles (64/99): source_data = Dense cloud, subtask = buildRawTiles, interpolation = Disabled, projection = NAD83(2011) / UTM zone 17N + NAVD88, classes = Created (never classified), resolution = 0
2022-10-19 13:39:13 loaded elevation data in 0.000698 sec
2022-10-19 13:39:13 Error: Vertical datum out of range
2022-10-19 13:39:13 processing failed in 1.09217 sec


The dense cloud was built in NAD83(2011), and a workaround is to build the DEM in UTM17N/ellipsoid height and then export it in UTM17N+NAVD88, but that breaks some of my scripts and is irritating.

Should I only have the new Geoid file installed? Seems like it might be related to the issue discussed earlier re 1.7.1?


39
Bug Reports / Re: Installation issue on Linux (Rocky or Centos 8)
« on: October 07, 2022, 03:08:14 AM »
Is 1.8.4 still not compatible with CentOS 7?
Figured I'd update this to say that we have Metashape 1.8.4 running on CentOS 7.6 for network processing.

40
General / optimizing co-alignment from very different cameras/views
« on: May 18, 2022, 10:25:59 PM »
We're exploring the ability of a marine survey platform we built to provide "baseline" datasets that can be compared with later small high-res collections to evaluate habitat change. In that vein, we've been working to co-register large area survey imagery with handheld camera imagery of a small portion of the survey area from a very different viewing distance (and time, and resolution, and lighting, etc). Both sets have good overlap, and align well independently. The high-res set has >90% overlap.

We tried many techniques and parameters (resampling, color correction, guided image matching, manual tie points, injecting camera position/orientation) with the full overlap region for both sets of images (400 high-res close-in images and 100 images with about 10x lower resolution), and failed to get any matches (valid or invalid) between image sets.

As a last-ditch effort we zoomed in on a small area with 10 and 11 images respectively and were able to get a couple dozen valid matches across the two cameras. Interestingly, for the full-res close-in images, guided image matching produced no cross-set matches, while for the high-res images resampled to 50% scale, guided matching produced more cross-set matches than unguided.

We're trying to understand how/if we can scale up to the full shared region and preserve the ability to generate cross-camera matches, and why we were able to get these matches. One theory is that our "matching budget" is being used up by within-set matches, and if we minimize within-set overlap we may be able to increase our cross-set matches (within-set = matches within the same camera set; cross-set = matches between the low-res long-range camera and the high-res close-range camera).

Wondering if anyone, and especially Alexey or other Metashape devs, has insights into which parameters control our ability to find matches ACROSS camera groups vs. within them. On a related note, I'm hoping to develop a couple of Python scripts that report within-camera-group vs. across-camera-group valid and invalid tie points. Not sure if the API fully supports that, but if anyone else is working on similar problems I'd love to start a discussion.
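
As a starting point, here's the kind of report I have in mind (rough, untested sketch against the 1.8 Python API; it assumes each image set is in its own camera group - key on camera.sensor instead if not - and that point_cloud points/projections expose track_id and valid as the API reference describes):

import Metashape
from collections import defaultdict

chunk = Metashape.app.document.chunk
tie_points = chunk.point_cloud                      # tie points in the 1.x API
valid = {p.track_id: p.valid for p in tie_points.points}

groups_per_track = defaultdict(set)
for camera in chunk.cameras:
    if camera.transform is None:                    # skip unaligned cameras
        continue
    group = camera.group.label if camera.group else "ungrouped"
    for proj in tie_points.projections[camera]:
        groups_per_track[proj.track_id].add(group)

cross = [t for t, g in groups_per_track.items() if len(g) > 1]
within = [t for t, g in groups_per_track.items() if len(g) == 1]
print("cross-group tracks:", len(cross), "valid:", sum(valid.get(t, False) for t in cross))
print("within-group tracks:", len(within), "valid:", sum(valid.get(t, False) for t in within))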

41
We're trying to make Metashape work well in a CentOS HPC network processing environment.

We are running into collisions between users/ports when multiple instances of the server are running. If a single server instance is running, then (as far as I know) there's no way to control which users have access to which processing nodes (allocated per user/job with SLURM scripts).

It would be nice if a server instance could be started when any user launches Metashape and then shared by any other users, but with each user restricted to only the worker nodes allocated to them.

42
Hi Paulo,

Thanks for taking the time to look at my log, and good catch! I didn't notice that, and it gives me a good idea of where to look in the future.

The lens is a fixed focal length Voigtlander Ultron 40mm f/2 SL II.

I wonder if the oblique (~45°) camera angle is a factor in that wild initial estimate. I periodically put in an estimated capture distance to try to help the reference/source preselection (we have PPP positions for all/most of the images), but from reading the manual and from conversations with Alexey, I don't think that really helps unless I also have YPR, which I don't.

Also this is a good reminder to initialize with the adjusted values from the previous flight. That probably would've helped.

Andy

43
I had an odd thing happen on my first alignment with 1.8.1:

All but 54 (of ~2000) cameras aligned, and the unaligned cameras were part of a double pass on a shoreline with good features, good GPS positions (PPP with camera arms input), and no distinguishable difference from their neighbors, right in the middle of a gently curving shoreline.

Since I kept the keypoints, I was able to select only the unaligned images and "align selected" and they aligned perfectly with the others in just a few minutes, but the apparently random dropout makes it hard for me to trust an alignment workflow in any automated fashion.
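
(For what it's worth, that "align selected" step can be scripted so an automated workflow can at least retry the dropouts - an untested sketch, assuming alignCameras accepts the cameras and reset_alignment arguments as in the 1.8 Python API reference:)

import Metashape

chunk = Metashape.app.document.chunk
# cameras that failed to align still have no transform
unaligned = [c for c in chunk.cameras if c.enabled and c.transform is None]
if unaligned:
    # realign just the dropouts, keeping the existing alignment (key points were kept)
    chunk.alignCameras(cameras=unaligned, reset_alignment=False)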

Align settings were (from the dropdown, not batch): Accuracy High, Generic Preselection, Reference Preselection (Source), 60,000 key point limit, 0 tie point limit, exclude stationary tie points checked, guided image matching off, adaptive fitting off.

I attached the logfile (zipped because of its size) for the alignment in case it's useful.

The camera is a Nikon D810; images are JPGs derived from RAW with Adobe Camera Raw at full quality, no chroma subsampling, the Adobe Camera Standard color profile, and no other tweaks. I have not seen this issue in 1.6.9 or 1.7.5 (I didn't try other intermediate versions).

/andy

44
Bug Reports / Re: corrupt .laz dense_cloud export? (1.6.5 linux)
« on: December 15, 2021, 12:00:39 AM »
Update for anyone who needs to fix this issue with LAZ files (thanks, Alexey, for your help):

According to Alexey (PM), the "Offset to Point Data" value in the LAS header is set 148 bytes larger than expected when a 1.2-format LAS file is promoted to format version 1.4 for point clouds larger than 4 billion points. Alexey suggested editing the value with a hex editor. My solution was a little easier: I used lasinfo with the -set_offset_to_point_data switch to adjust the value. In my version of lasinfo things were slightly complicated because it was not setting the offset correctly (the value written was 106 bytes lower than specified). Your results may differ (e.g. a different lasinfo version, or .las instead of .laz), but the same general method should work. Steps as follows:

First, query the file(s) for the existing value:

lasinfo -i *.laz -no_check -no_vlrs -stdout |findstr /C:"offset to point data"

  offset to point data:       1388
  offset to point data:       1536
  offset to point data:       1536
  offset to point data:       1536
  offset to point data:       1536
  offset to point data:       1536
  offset to point data:       1536

Next, set the new value:

lasinfo -i *.laz -no_check -no_vlrs -set_offset_to_point_data <old value - 148 + 106>

In my case the old value was 1536 (for all but one file, so I changed that one separately), so the command was this:

lasinfo -i *.laz -no_check -no_vlrs -set_offset_to_point_data 1494

Now check to see if it's correct:

lasinfo -i *.laz -no_check -no_vlrs -stdout |findstr /C:"offset to point data"

  offset to point data:       1388
  offset to point data:       1388
  offset to point data:       1388
  offset to point data:       1388
  offset to point data:       1388
  offset to point data:       1388
  offset to point data:       1388

If not, figure out what you need to substitute for 106. If you get an error (lasreader error), try setting it back to the original value. When I tried setting mine back to 1536 it reported 1430, which is when I realized the written value was 106 bytes lower than what I specified (maybe because it's .laz instead of .las?).

Anyway, here's the fix for posterity.
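
And if you don't have lasinfo handy, the hex-editor route Alexey suggested can be done with a few lines of Python - an untested sketch that patches the 4-byte "offset to point data" field, which per the LAS spec sits at byte 96 of the public header (the header is stored uncompressed in .laz too). Back up your files first, and adjust the delta if your overage isn't 148 bytes:

import struct

def patch_offset_to_point_data(path, delta=-148):
    # shift the LAS/LAZ header's "offset to point data" field by delta bytes
    with open(path, "r+b") as f:
        f.seek(96)                                   # field position per the LAS 1.4 spec
        (offset,) = struct.unpack("<I", f.read(4))   # little-endian unsigned 32-bit
        f.seek(96)
        f.write(struct.pack("<I", offset + delta))
    print(path, ":", offset, "->", offset + delta)

patch_offset_to_point_data("dense_cloud.laz")        # placeholder filename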

45
Bug Reports / Re: corrupt .laz dense_cloud export? (1.6.5 linux)
« on: December 09, 2021, 01:53:58 AM »
Thank you Alexey! PM sent.
