

Topics - andyroo

61
I can get a lot of useful comparison metrics from the chunk info, but right now I have to right-click, mouse into the info window, and click before I can hit Ctrl-A to select all, then Ctrl-C to copy, then Alt-Tab to switch to Calc and Ctrl-V to paste, plus several more keyboard-only commands to extract the right column, copy-paste again, transpose, and move down a column for the next row of input.

I could script all of this keyboard-only and iterate through a project with dozens of chunks (like the one I'm doing painfully by hand and wrist right now) if I had one hotkey that pulled up info on the highlighted (gray background) chunk, not the active bolded chunk. It would also need to shift focus to that window so a Ctrl-A would select all.

If I had the ability to set a custom hotkey for chunk info, it would be most wonderful.
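
In the meantime, here is the kind of per-chunk summary I'm after, pulled through the Python console instead of the Chunk Info dialog (a minimal sketch assuming the standard 1.x chunk/camera attributes; the tab-separated print is just so each chunk pastes into Calc as one row):

Code: [Select]
import Metashape

doc = Metashape.app.document
print("chunk\tcameras\taligned\ttie points")
for chunk in doc.chunks:
    #count aligned cameras and sparse points per chunk
    aligned = len([c for c in chunk.cameras if c.transform])
    points = len(chunk.point_cloud.points) if chunk.point_cloud else 0
    print("%s\t%d\t%d\t%d" % (chunk.label, len(chunk.cameras), aligned, points))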

62
General / Fine-level task subdivision performance
« on: May 01, 2020, 01:06:30 AM »
Has anyone benchmarked a good-sized project with fine-level task subdivision enabled vs disabled? Using 1.6.2, I'm just curious whether it speeds up or slows down processing if you're otherwise not limited by RAM. Running on an AMD 3960X with 256GB RAM and two 2080 Super GPUs. I could try re-running the same ~7k image project if nobody has an answer.

63
Feature Requests / reduce bit depth in ortho generation (or export)
« on: January 14, 2020, 09:59:42 PM »
It would be nice to have an option to reduce bit depth for ortho generation. We are using 16-bit color for alignment, but don't need that color depth for orthoimagery. Image size is dramatically increased with 16-bit color, and platform compatibility is also reduced. The internal Metashape ortho files are also huge compared to 8-bit, and since we aren't able to specify a minimum ortho resolution until the export stage, unnecessary disk space is used.

[edit] One way to do this (it wouldn't address the issue of "build ortho" size, but would enable 8-bit export) would be to add a colors_rgb_8bit parameter to the exportOrthophotos method, like the one that has been added to exportModel and exportPoints in the API.
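
For reference, here's roughly what I have in mind (a sketch only; the .las path and keywords are illustrative, the exact exportPoints signature varies by version, and the exportOrthophotos call with that keyword is the requested addition, not something that exists today):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

#existing: 16-bit project, but dense cloud exported with 8-bit colors
chunk.exportPoints("dense_8bit.las", colors_rgb_8bit=True)

#requested: the same keyword on the ortho export
#chunk.exportOrthophotos("ortho.tif", colors_rgb_8bit=True)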

64
I stopped a tiled dense cloud export this AM (3 billion points) that had created ~10 thousand 1km x 1km tiles on a ~500km x ~500m section of coast that's sort of v-shaped. It was 16 hours in and reported >16 hours to go.

Exporting the entire cloud as a single file took ~34 minutes. It looks like maybe there's a "blind" tiling of the region before anything else, which generated the ~10,000 tiles when it should be 500-1,000 tiles at most? I'm not sure why everything slowed down so much, but I wanted to report it. Happy to provide more info if it helps.

Andy

65
I'm building out a 10G network to start network processing with our several (Windows x64) Metashape machines, and I'm wondering if there's any disadvantage to having the SERVER node be the computer that hosts the shared STORAGE for network processing.

I noticed that in the "How to configure network processing" guidance, Agisoft says that the SERVER doesn't need to be a powerful machine and the shared STORAGE should be accessible to all machines.

I was going to add an SSD NAS to the network for the shared storage, but then I realized I could just put an SSD RAID card in my SERVER computer and save a switch port (maybe for another worker node :-) ), as long as the SSD-to-network file transfer bandwidth wouldn't interfere with the server commands.

Any thoughts on if there are disadvantages to this?

66
I notice on a ~45k image project that during dense reconstruction the initial processing runs through chunks of ~235 images at a time.

For each step, the processing loop usually spends about 1/3 of the time (per chunk) "loading images" with relatively low CPU and disk usage (CPU in bursts of up to about 35%, disk in 9-10 "sawtooth" bursts per minute - see screenshots).

Then for the other 2/3-3/4 of the time things proceed as I would expect, "estimating disparity" with CPU and GPU at pretty much maximum and minimal (I think no) disk access.

I'm wondering what the bottleneck is during the 1/3 of the time spent loading images, since Metashape is so good for most of the workflow at maxing out at least disk reads, if not CPU or CPU+GPU.

I haven't compared TIF/JPG performance with DNG yet, and I'm wondering if there's a difference based on the image format or on the bit depth the project is processed at (the current chunk is processed as DNG from ARW), or whether I could make one by changing hardware (SSD, M.2, or RAMDISK) or other system settings. I am suspicious that storage media would make a difference since HDD access is so sporadic, and I was thinking maybe there's an issue with file index optimization that's slowing things down.

The one reason I think file indexing might play a role is that I noticed that if I disable photos in a chunk with 45K images it takes about 2 seconds, but if I delete photos from those chunks instead (trying to make a better organized/more efficient project structure) then it takes 5-10 minutes in this project.
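
For what it's worth, the disable-vs-delete comparison above is roughly equivalent to this scripted version (a minimal sketch using camera.enabled and chunk.remove(); the timing wrappers are just for illustration):

Code: [Select]
import time
import Metashape

chunk = Metashape.app.document.chunk
selected = [c for c in chunk.cameras if c.selected]

#disabling the selected cameras (~2 seconds for me on a 45k-image chunk)
t0 = time.time()
for camera in selected:
    camera.enabled = False
print("disable: %.1f s" % (time.time() - t0))

#removing them instead (5-10 minutes in this project)
t0 = time.time()
chunk.remove(selected)
print("remove: %.1f s" % (time.time() - t0))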

67
I'm processing a dataset with about 45k images and I notice that the gradual selection operations take several hours per iteration. My first Reconstruction uncertainty filter took about 8 hrs on a sparse cloud with 172M points, and it looks like most of that time there is only a single CPU core working.

Wondering if there's a way to parallelize the gradual selection tools so they use all available cores, or even offload to the GPU.
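
For reference, the operation I'm timing should be the scripted equivalent of this (a minimal sketch assuming the 1.x PointCloud.Filter API; the threshold of 100 is just a placeholder):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

#gradual selection by reconstruction uncertainty, scripted
f = Metashape.PointCloud.Filter()
f.init(chunk, criterion=Metashape.PointCloud.Filter.ReconstructionUncertainty)
f.selectPoints(100)
chunk.point_cloud.removeSelectedPoints()
#on 172M sparse points this whole sequence is what takes ~8 hrs for me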

Andy

68
I got the dreaded bad license error on Metashape 1.5.3 after some messing around with hardware. In more detail:

Just shipped a computer to the other coast and it got beat up pretty badly, so I lost some RAM (actually a channel on the MOBO). I also pulled all the HDDs except C:\ (Metashape is installed on C:\). Anyway, I finally got 'er up and running with three new HDDs, launched Metashape to test the GPUs, and got the "No license found.
Details: Wrong host for license (-4)" error.

Attempting to generate a new license file for offline activation gave me the error about the host license already being on the machine and prompted me to delete it. I thought you might want to see it, so I didn't.

Attempting to license online gave me the following error:

"unexpected error occurred: can't use activated license. Please contact support@agisoft.com."


69
Just updated three machines to Windows build 1903 and Metashape 1.5.3.

Can confirm that on two of them, clicking a photo in the thumbnail pane crashes Metashape. A colleague reported the error earlier today with DNG files, and I submitted a crash report maybe 20 minutes ago (but after processing).

I was able to duplicate it just now by launching Metashape, adding one JPG, then double-clicking on it. I'll include this thread with the crash report.

70
I have a problem that I think is related to the chunk transform matrix orientation being tangential to one part of a 200km-long, fairly linear survey (ellipsoidal heights). I am iterating through a shapefile, setting the region to each shape's bounds with an arbitrary z range (thanks to other posts on region manipulation in the forum).

I can successfully set XY to the GCS, but the Z axis seems to be tangential to the overall point cloud, so my z bounds work for some parts of the survey (generally in the middle) but end up above the ground plane at the ends, and I can't figure out how to refine the z orientation and properly (un)project on a per-region basis. Here is my code (although mostly it's code from other people that I hacked together - I think much of it was contributed by Alexey in some form, thanks again).

Code: [Select]
import Metashape
import math
from pathlib import Path

#Set script variables below

#INPUT SHAPEFILE
in_shp = "input.shp"

#interval to step through shapefile (every nth shape) and produce dense clouds
shpStep = 20

#minimum elevation of region (meters)
zmin = -50

#max elevation of region (meters)
zmax = 1000

#initialize Metashape variables
doc = Metashape.app.document
chunk = doc.chunk
region = chunk.region
T = chunk.transform.matrix
S = chunk.transform.scale
m = Metashape.Vector([10E+10, 10E+10, 10E+10])
M = -m

#First rotate the region to the dataframe coordinate system
v_t = T * Metashape.Vector( [0,0,0,1] )
v_t.size = 3
R = chunk.crs.localframe(v_t) * T
region.rot = R.rotation().t()
chunk.region = region

#import shapes
chunk.importShapes(in_shp)

#Now iterate through selected shapes, reshape region, make dense cloud, export dense cloud
#before we get started, define a z range list from zmin and zmax
zrange = [zmin, zmax]

for shape in chunk.shapes.shapes[::shpStep]:
        print('changing region to',shape.attributes['Name'])
        #(re)initialize m and -m to really big and really small values for size and center calcs for each poly
        m = Metashape.Vector([10E+10, 10E+10, 10E+10])
        M = -m
        for vertex in shape.vertices:
            #add z vals to xy from 2D shapefile and iterate through those too
            for z in zrange:
                #copy the vertex so the shape geometry isn't modified in place
                coord = Metashape.Vector([vertex.x, vertex.y, z])
                coord = chunk.crs.unproject(coord)
                print(coord)
                for i in range(3):
                    m[i] = min(m[i], coord[i])
                    M[i] = max(M[i], coord[i])

        #calculate center and size
        center = (M + m) / 2
        size = M - m

        #Apply to the region
        region.center = T.inv().mulp(center)
        region.size = size * (1/S)
        chunk.region = region
        print('region changed to',shape.attributes['Name'])

I think my problem is related to the region rotation being relative to the chunk, so the z axis is tangential to some part of the survey, but when I try the following it doesn't seem to work either:

Code: [Select]
R = chunk.crs.localframe(center) * T
region.rot = R.rotation().t()
chunk.region = region

Any advice is appreciated.
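
One thing I may try next (an untested sketch, using the same calls as above): compute the min/max corner bounds in the local frame at each shape instead of in geocentric coordinates, and recompute the rotation per shape, so the region size actually matches the per-shape rotation. This would replace the inner part of the per-shape loop:

Code: [Select]
        #geocentric reference point for this shape (first vertex is close enough)
        ref = chunk.crs.unproject(Metashape.Vector([shape.vertices[0].x, shape.vertices[0].y, zmin]))
        L = chunk.crs.localframe(ref)        #geocentric -> local ENU frame at this shape
        m = Metashape.Vector([10E+10, 10E+10, 10E+10])
        M = -m
        for vertex in shape.vertices:
            for z in zrange:
                coord = chunk.crs.unproject(Metashape.Vector([vertex.x, vertex.y, z]))
                v = L.mulp(coord)            #corner expressed in the local frame
                for i in range(3):
                    m[i] = min(m[i], v[i])
                    M[i] = max(M[i], v[i])
        center = L.inv().mulp((M + m) / 2)   #local-frame center back to geocentric
        region.rot = (L * T).rotation().t()  #align region axes to this shape's local frame
        region.size = (M - m) * (1 / S)
        region.center = T.inv().mulp(center)
        chunk.region = region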

71
I am generating point cloud tiles to compare SfM with lidar. To do this, I iterate through the lidar tile index shapefile (every nth tile) and set the region to that tile shape, then generate a dense cloud.

Sometimes on a tile with only a portion (less than half) of the tile occupied by sparse cloud points, Metashape will not select any cameras to build the depth maps or dense cloud, even though there are hundreds of sparse tie points occupying, for example, 1/3 of the tile.


My bad - I didn't realize that the z values for the bottom of the region were above the point cloud.

72
I found the very clear FAQ article "How to use height above geoid for the coordinate system" that explains how Agisoft expects users to use geoid/ellipsoid values.

But this article only talks about using geoids with input coordinates. What is the best way (or is there a way) to work with ellipsoid heights in your project, then apply the geoid to the exported products? If this is not done easily, I would like to add it as a feature request.

For example, I just completed a project in NAD83(2011) (ellipsoid heights) where I generated a dense cloud in geographic coordinates and DEMs using projected (UTM 10) coordinates, both with ellipsoid heights. It looks like if I want NAVD88 elevations I can export the dense cloud in NAD83(2011)+NAVD88 geographic, then convert to UTM outside of Metashape. Similarly, I can export the DEM with ellipsoid heights in NAD83(2011) UTM 10N, then convert to geoid heights outside of Metashape.

But it seems like, since all I am doing is applying the geoid to the elevation values, I should be able to do both within Metashape and simply select NAD83(2011) + NAVD88 UTM 10N on export.

My thought on this workflow is that using geoid heights while processing introduces another layer of uncertainty/error, so it would be best practice to only apply the geoid on exporting a final product, rather than while processing in Metashape. If this is true, it seems like it would be better to have the option of converting "final" ellipsoid heights to geoid heights, than to convert your whole project to geoid heights at the beginning.

73
In complex terrain I often have difficulty seeing and isolating the points I want to edit, and when I use other tools like select-points-by-color or the misc classify tools, settings that get all the surf noise out often also edit out parts of the land I want to keep.

I really like how the PhotoScan edit tools are evolving, but if I were able to select a part of the dense cloud (like a section of shoreline or a deep part of a river with high/low noise) and then isolate it so everything else disappeared (sort of like Adobe Illustrator's "isolation mode"), many of these tools would be more useful.

For example, I could run noise classification on only that isolated part, or manually edit only that part by turning it on edge and trimming the noise, or select the dark blue/white parts of deep water/surf for only that part.

I think it would be especially useful for speeding up manual edits while avoiding accidentally clipping areas of the dense cloud that are hidden/far away because of my view.

Andy

74
Quote from Paulo: ".....So for the Photo-invariant parameters option, would it not be better to use Photo-variant parameters instead, as in this case a different parameter will be estimated for each photo?"

I agree with Paulo's comment in the pre-release topic - I had to dig into the forum to clarify what this was because it is not explained in the manual, and in the version of English I learned, "invariant" means "unchanging" so I was worried that with "Photo-invariant parameters = none" Metashape was letting all enabled parameters vary for all photos.

My suggestion would be either what Paulo said - change the label to "Photo-variant parameters" - or maybe something with slightly more explicit wording, like "Per-photo variable parameters".

This is an interesting setting. Seems like it could be useful for collections where autofocus or image stabilization or shutter priority was enabled...

75
I reprocess the DEM GeoTIFFs output from PhotoScan with gdal_translate to shrink them significantly after export. I know that MATLAB and some other software doesn't like LZW-compressed TIFFs, but honestly that's lazy programming and it irritates me, since LZW (lossless) TIFF compression has been around for 20+ years and the patent expired 16 years ago.

It would be nice if some of the GDAL TIFF creation options were exposed (like -co COMPRESS=LZW -co PREDICTOR=2).

My default is to use PREDICTOR=2, but PREDICTOR=3 is sometimes better (40-50% reduction in file size), and even with no predictor I generally get a ~25% reduction in file size:

Code: [Select]
gdal_translate [Photoscan_DEM].tif [Andy_DEM].tif -co "COMPRESS=LZW" -co "PREDICTOR=2" -co "TILED=YES" -co "BLOCKXSIZE=256" -co "BLOCKYSIZE=256"

For example, for a 1m DEM of a 30 sq km strip along a river, the original output from PhotoScan is 284MB. If I run gdal_translate to losslessly compress with LZW (tiled, 256x256 block size, as per the snippet above) I get:

211MB -co "COMPRESS=LZW"
211MB -co "COMPRESS=LZW" -co "PREDICTOR=1"
174MB -co "COMPRESS=LZW" -co "PREDICTOR=2"
162MB -co "COMPRESS=LZW" -co "PREDICTOR=3"


Also, I'm curious whether the JPEG tiles in the TIFF export are in YCbCr colorspace. My workflow currently exports ortho tiles as LZW, then uses GDAL to assemble them into a nice efficient ortho for GIS. I see that Metashape has gradually added some of the dials and levers I use, but it's not clear that they are all there yet.

This is what I do after exporting ortho tiles as LZW from Metashape:

Code: [Select]
#From tiles build VRT
gdalbuildvrt -srcnodata 0 -vrtnodata 0 [tileset_VRT].vrt [tileset_root]*.tif
#translate VRT into a TIFF of JPEG tiles that are optimized in YCbCr colorspace
gdal_translate -of Gtiff -co "COMPRESS=JPEG" -co "JPEG_QUALITY=90" -co "TILED=YES" -co "PHOTOMETRIC=YCBCR" -co BLOCKYSIZE=256 -co BLOCKXSIZE=256 -co "TFW=YES" -b 1 -b 2 -b 3 [tileset_VRT].vrt [Final_GeoTiff].tif

This makes nice orthos that aren't ridiculously huge and load quickly in GIS. If I want to pre-build overviews I do this:

Code: [Select]
gdaladdo -r gauss -ro [Final_GeoTiff].tif 2 4 8 16 32 64 128 256 --config COMPRESS_OVERVIEW JPEG --config USE_RRD NO --config JPEG_QUALITY 90 --config TILED YES --config PHOTOMETRIC_OVERVIEW YCBCR




