

Topics - ashalota

31
General / Most efficient way to ensure High Accuracy alignment?
« on: January 15, 2020, 08:15:35 PM »
I have a set of 20-40 aerial photos (in a line, ~90% overlap), with excellent GPS data.

I have run the alignment > mesh > orthomosaic workflow on them several times.
Settings are "High Accuracy" alignment > "Low Face Count" mesh > orthomosaic.

I am trying to figure out the most efficient way to get a high accuracy orthomosaic. I'm comparing my results to Google Maps to check accuracy (I don't know if they are perfect either, but so far it's been a good reference point).

At the moment, if I use only the first 20 photos, my total Chunk Error is showing as ~7 m. I have tried rerunning alignment several times and it still sticks around there, even with "Highest" accuracy selected. When I look at the orthomosaic, it is clear along a coastline in the images that the alignment is not doing a very good job.

However, if I align all 40 photos, and then export the orthomosaic for the area of the first 20, the orthomosaic looks great and aligns perfectly on the coastline. It also has a total Chunk Error of ~0.6m.

I thought this might mean that including a slightly larger area is good for alignment (despite the increased processing time), but later in my dataset, including extra photos for alignment that aren't quite inside my reference chunk actually produces worse results.


Do you have any recommendations for an efficient process to get high accuracy orthomosaics?
1) Should I be including additional cameras that are just outside of my orthomosaic bounds to improve the quality of the orthomosaic?
2) Is a low face count mesh the right choice for an orthomosaic, or should I adjust this?
3) Are there any other default settings I should be editing to increase the quality of the orthomosaic output?
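
For reference, here is a minimal sketch of the workflow I'm describing, using the same Metashape Python calls that appear in my later posts. Treat the accuracy parameter name as an assumption: older API builds use accuracy=Metashape.HighAccuracy, while newer ones use downscale=1 for the same setting.

Code:
import Metashape

doc = Metashape.app.document
chunk = doc.chunk

# "High" accuracy alignment (newer builds: downscale=1 instead of the accuracy enum)
chunk.matchPhotos(accuracy=Metashape.HighAccuracy,
                  generic_preselection=True,
                  reference_preselection=True)
chunk.alignCameras()

# Low face count mesh used as the orthorectification surface
chunk.buildModel(surface=Metashape.Arbitrary,
                 interpolation=Metashape.EnableInterpolation,
                 face_count=Metashape.LowFaceCount)

# Orthomosaic built from the mesh
chunk.buildOrthomosaic(surface=Metashape.ModelData,
                       blending=Metashape.MosaicBlending)
doc.save()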




32
I'm trying to create an orthomosaic of a long transect of 5000 photos.

After several failures, my current plan is to:
1) Align all photos in one chunk
2) Split into ~200 chunks
3) Create meshes
4) Combine chunks
5) Create orthomosaic

I looked at the split-into-chunks script (https://github.com/agisoft-llc/metashape-scripts/blob/master/src/split_in_chunks_dialog.py), but it isn't working out for me since it seems to copy all the cameras every time. Running chunk.copy on 5000 photos, along with the alignment points, takes a very long time.

Does anyone know a workaround where I can copy only my ~20 photos at a time, but also copy the alignment/key points with them into the new chunk?
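
For context, this is roughly the copy-then-prune pattern the split script uses, which is exactly the expensive step I'd like to avoid (a sketch only; the camera labels are hypothetical):

Code:
import Metashape

doc = Metashape.app.document
source = doc.chunk
keep_labels = {"IMG_0001", "IMG_0002"}  # hypothetical: the ~20 photos for this sub-chunk

new_chunk = source.copy()   # copies every camera plus the tie points (slow with 5000 photos)
new_chunk.label = "subset_01"
unwanted = [cam for cam in new_chunk.cameras if cam.label not in keep_labels]
new_chunk.remove(unwanted)  # prune the copy down to the subset afterwards
doc.save()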

33
General / Bad Stitching - What causes this?
« on: December 27, 2019, 08:31:12 PM »
I have a set of aerial photographs taken in long lines. Over 80% overlap along the lines. Roughly 5000 photos. The GPS data is great, and includes lat/lon, altitude, roll, pitch, yaw.

I'm experimenting with different methods to get an orthomosaic for the entire set, but it just isn't working out. Here are the current steps I take:

1. Load the photos into chunks, with each chunk containing a group of ~ 20 overlapping photos in a contiguous area along my flight line. There is overlap in the photos between the different chunks, so several photos at the edges will appear in two chunks.

2. For each chunk:
  - chunk.matchPhotos(generic_preselection=False)
  - chunk.alignCameras()
  - After some testing, I decided at this point to do some additional things that may or may not be helping:
    - If, after aligning the chunk, most of it is aligned but a section of at least 4 images in a row (which, due to my naming, means they are near each other) is not aligned (shows 'NA' in the GUI), I move those photos out of that chunk and into a new chunk. I then align those new chunks and repeat the process if needed, so that wherever possible I don't have large groups of unaligned photos. The goal was to get as many photos aligned as possible.

3. For each chunk/new partial chunk:
  - Check the total error of the chunk using the math described here: https://www.agisoft.com/forum/index.php?topic=5748.0 (see the sketch after this list)
  - If the total chunk error is less than 10 m, accept it; otherwise retry the alignment a few times before giving up and ignoring that bad chunk in the future.
    - I couldn't decide how to pick my error threshold, though in practice I rarely get anywhere near 10 m; it's usually much smaller, like < 0.5 m.

4. Merge good chunks:
  - doc.mergeChunks(goodChunks,merge_markers=True)
  - mergedChunk = doc.chunks[len(doc.chunks)-1]

5. Build mesh:
  - mergedChunk.buildModel(surface=Metashape.Arbitrary, interpolation=Metashape.EnableInterpolation,face_count=Metashape.LowFaceCount)

6. Build orthomosaic:
  - mergedChunk.buildOrthomosaic(surface=Metashape.ModelData,blending=Metashape.MosaicBlending,projection=Metashape.CoordinateSystem("EPSG::4326"))
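
For reference, the total chunk error check in step 3 is based on the calculation from that thread, roughly this sketch (it compares each aligned camera's estimated position with its reference position in geocentric coordinates, so the result is in meters):

Code:
import math
import Metashape

chunk = Metashape.app.document.chunk
T = chunk.transform.matrix

sum_sq, count = 0.0, 0
for camera in chunk.cameras:
    if camera.transform is None or camera.reference.location is None:
        continue  # skip unaligned cameras and cameras without reference data
    estimated = T.mulp(camera.center)                           # estimated position, geocentric
    reference = chunk.crs.unproject(camera.reference.location)  # reference position, geocentric
    sum_sq += (estimated - reference).norm() ** 2
    count += 1

if count:
    print("Total camera location error (m): %.3f" % math.sqrt(sum_sq / count))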

My reasoning for splitting into chunks was that with such a long, varying flight line, running everything as one model gave poor results, and I thought that might be because the program was building a single model over a very long area. Sometimes the method described above produces good results, but not consistently enough. Whenever I have an area with a LOT of overlap (when the flight is more of a grid pattern instead of a single line), the results are perfect.

Some of the "bad" results are like these:

You can see the roads completely warped in one, and the rim of a mountain disconnected in another.




34
I've written the following function:

Code:
def sortChunks(doc):
    """Sorts chunks in alphabetical order by title. Chunks beginning with "Merged" will be placed at the bottom."""

    chunkDict = {}
    mergeDict = {}
    for chunk in doc.chunks:
        if chunk.label.startswith('Merged'):
            mergeDict[chunk.label] = chunk
        else:
            chunkDict[chunk.label] = chunk
   
    labels = sorted(chunkDict.keys())
    mergeLabels = sorted(mergeDict.keys())
   
    newChunks = []
    for label in labels:
        newChunks.append(chunkDict[label])
    for label in mergeLabels:
        newChunks.append(mergeDict[label])
   
    for i in range(len(doc.chunks)):  # note: range(len(doc.chunks)-1) would skip the last chunk
        print(doc.chunks[i].label + ' -> ' + newChunks[i].label)
        doc.chunks[i] = newChunks[i]  # NO EFFECT!
    doc.save()

Unfortunately the assignment

Code:
doc.chunks[i] = newChunks[i]

has no effect. Is there any way to reorder chunks in a script?

35
I'm creating an orthomosaic of roughly 7000 images, with 90% overlap at most points.
In the past, I've run into issues when running all images as one chunk (especially when they were in long single-image-width lines), so I decided to split them into smaller chunks (in a format to match the rest of my data, usually 50-200 images per chunk).

I split the images, with some overlap for photos that intersect several of the chunk boundaries I decided on. I then align the images, create a low face count mesh, and merge all chunks. In the past I have had the best/quickest results with a low face count mesh as the input for the orthophoto.

When I transfer my models, attachment 1 is what the merged mesh looks like (you can practically see the outline of my chunks in it).
If I recreate a low face count mesh in the merged chunk, the result is image 2 (nm_new_mesh).

Before I continue with the process of creating an orthomosaic, I'd like to know:

1) Of the two meshes, is either one better suited for creating the orthomosaic?
2) Is splitting into chunks totally unnecessary? Will it help or hurt my output?
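
For completeness, recreating the low face count mesh in the merged chunk (image 2) is just this, using the same calls as in my other posts (a sketch; it assumes the merged chunk is the last one in the document):

Code:
import Metashape

doc = Metashape.app.document
merged = doc.chunks[-1]  # assumption: the merged chunk is the most recently added one

# Rebuild a single low face count mesh over the whole merged area,
# instead of keeping the per-chunk meshes carried over by the merge
merged.buildModel(surface=Metashape.Arbitrary,
                  interpolation=Metashape.EnableInterpolation,
                  face_count=Metashape.LowFaceCount)
doc.save()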



36
General / What graphics card to buy for Linux CentOS?
« on: March 14, 2019, 05:43:20 PM »
How should I decide what graphics card to buy in order to use GPU acceleration with Metashape on Linux CentOS? I know the manual says there are no guarantees beyond the list of cards tested under Windows, but I'd appreciate hearing from anyone with experience with other cards or recommendations for a good one to try.

I am processing aerial survey photos in long tracks, and alignment plus mosaicking frequently takes at least 20 hours (1000+ images). I would really like to speed this process up.
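
In case it helps, this is a small sketch of how I understand GPU devices can be listed and enabled from the Python console; the attribute names are my assumption from the API reference, so adjust for your version:

Code:
import Metashape

devices = Metashape.app.enumGPUDevices()          # list the GPUs Metashape can see
for device in devices:
    print(device)

Metashape.app.gpu_mask = (1 << len(devices)) - 1  # bitmask enabling every detected GPU
Metashape.app.cpu_enable = False                  # optionally free CPU cores while GPUs work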



The output of lsb_release -a:

LSB Version:   :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID:   CentOS
Description:   CentOS Linux release 7.6.1810 (Core)
Release:   7.6.1810
Codename:   Core



The output of lscpu:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                72
On-line CPU(s) list:   0-71
Thread(s) per core:    2
Core(s) per socket:    18
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
Stepping:              1
CPU MHz:               1200.988
CPU max MHz:           3300.0000
CPU min MHz:           1200.0000
BogoMIPS:              4200.12
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              46080K
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts


37
Bug Reports / Export Orthomosaic TIF with JPG compression no alpha channel
« on: December 04, 2018, 02:00:18 AM »
https://drive.google.com/file/d/1Ih0IQCvB-r3RXqqf-LOGpV-jG6gpdvic/view?usp=sharing

I'm trying to export an orthomosaic. I am using PhotoScan 1.4.1. The method I am using is the same one that previously worked for me on 1.3, but now I am having issues.

When I use the settings shown in that image (export as .tif with jpg compression), the alpha channel is not created in my output .tif files. If I change it to lzw compression, I do get the alpha channel.

Is this expected behavior, or a bug?
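
For reference, the scripted equivalent of what I'm doing is roughly this (a sketch from memory of the 1.4 API, so treat the parameter and enum names as assumptions):

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk

# TIFF with JPEG compression: the alpha channel goes missing in my output
chunk.exportOrthomosaic("ortho_jpeg_compressed.tif",
                        image_format=PhotoScan.ImageFormatTIFF,
                        tiff_compression=PhotoScan.TiffCompressionJPEG,
                        write_alpha=True)

# TIFF with LZW compression: the alpha channel is written as expected
chunk.exportOrthomosaic("ortho_lzw.tif",
                        image_format=PhotoScan.ImageFormatTIFF,
                        tiff_compression=PhotoScan.TiffCompressionLZW,
                        write_alpha=True)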

38
I am receiving the following error when I run the program. What does it mean?

[user@machine ~]$ numactl --interleave=all /usr/local/photoscan-pro_1_3_1/photoscan.sh&
[3] 134710
[2]   Exit 1                  numactl --interleave=all agisoft
[user@machine ~]$ failed to get the current screen resources
The license has expired.
WARNING: Application calling GLX 1.3 function "glXCreatePbuffer" when GLX 1.3 is not supported!  This is an application bug!
QXcbConnection: XCB error: 172 (Unknown), sequence: 169, resource id: 140, major code: 149 (Unknown), minor code: 20
QXcbConnection: XCB error: 1 (BadRequest), sequence: 626, resource id: 140, major code: 149 (Unknown), minor code: 25

39
General / Is GPU required to run 'align photos' quickly?
« on: September 11, 2017, 06:20:21 PM »
I am running Align Photos on a 200-photo chunk on a Linux machine with 750 GB of memory.
It is taking 90 seconds/photo.

I noticed there is no GPU showing under the GPU tab in 1.3.1. Is a GPU required to make Align Photos run more quickly? On my laptop, which is much less powerful but has a GPU, I am able to process the same set of images at about 3 seconds/photo.

40
General / Resume Build Orthomosaic for crashed operation (1.3.2.) ?
« on: May 26, 2017, 06:06:43 AM »
Is it possible to resume an operation that crashed? My connection to the photos I was building the orthomosaic from timed out in the middle of a 4-hour process, and I don't want to start all over.

Here are some excerpts from the log; it's clear many parts of the operation completed without any issue:

2017-05-25 16:48:45 BuildOrthomosaic: projection type = Geographic, projection = WGS 84, surface = Mesh, blending mode = Mosaic, pixel size = 2.90036e-007 x 2.76802e-007
2017-05-25 16:48:45 Analyzing mesh...
2017-05-25 16:48:45 tessellating mesh... done (4068 -> 4074 faces)
2017-05-25 16:48:45 generating 300888x400178 orthophoto (13 levels, 0 resolution)
2017-05-25 16:48:45 selected 119 cameras
2017-05-25 16:48:45 Orthorectifying images...
2017-05-25 16:48:45 Orthorectifying 119 images
2017-05-25 16:49:21 5716.jpg: 11569x10298 -> 10665x4209
2017-05-25 16:49:50 5717.jpg: 13463x16403 -> 11266x9009
2017-05-25 16:49:52 5718.jpg: 14066x20460 -> 11367x8896
[...]
2017-05-25 19:34:05 Finished orthorectification in 9919.83 sec
2017-05-25 19:34:06 selected 397 tiles
2017-05-25 19:34:06 selected 397 tiles
2017-05-25 19:34:06 Updating partition...
2017-05-25 19:34:06 2 of 2 processed in 0.877 sec
2017-05-25 19:34:10 4 of 4 processed in 2.876 sec
[...]
2017-05-25 19:52:40 partition updated in 1114.38 sec
2017-05-25 19:52:40 selected 397 tiles
2017-05-25 19:52:40 Updating orthomosaic...
2017-05-25 19:52:44 loaded partition in 4.399 sec
2017-05-25 19:52:44 boundaries extracted in 0.05 sec
2017-05-25 19:52:45 0 images blended in 0.991 sec
2017-05-25 19:52:51 loaded partition in 3.933 sec
2017-05-25 19:52:51 boundaries extracted in 0.061 sec
2017-05-25 19:52:59 3 images blended in 7.766 sec
[...]
2017-05-25 20:47:52 Finished processing in 14346.5 sec (exit code 0)
2017-05-25 20:47:52 Error: Can't open tile: G:/path/to/myproject/myproject.files/0/0/orthomosaic/tile-9-85.tif

41
I have a 2000 photo project, split into chunks with 200 pictures each. I thought that would be a fairly reasonable size.
I have associated reference data for lat, lon, ellipsoid height (altitude), roll, pitch, yaw.

When I run Align Cameras with High accuracy and the key/tie point limits at 40k/4k, I can't figure out the pattern for processing speed at all. I think it ran fairly quickly once or twice, but now it is taking 5 hours per chunk. I am currently trying to run it with pair preselection set to "Reference", since I have all the reference information and assumed that would speed it up a lot, but it does no such thing.
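
For reference, the settings described above correspond roughly to this call in the 1.x Python API (a sketch; the preselection parameter naming differs between versions, so treat it as an assumption):

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk

chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  generic_preselection=False,
                  reference_preselection=True,  # pair preselection = "Reference"
                  # older builds use preselection=PhotoScan.ReferencePreselection instead
                  keypoint_limit=40000,
                  tiepoint_limit=4000)
chunk.alignCameras()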

These are the stats I see for my machine when I run "free -g":
              total        used        free      shared  buff/cache   available
Mem:            755          11           5           0         739         741
Swap:             1           0           1

So I think that means I have 755 GB of memory, almost all of which is available?

These are the stats for my PhotoScan process when I run "ps au --sort -rss":

USER        PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
myUser   55289 3514  0.6 10410328 5377308 pts/5 Sl  13:05 2911:41 /usr/local/photoscan.sh

Do I need to change a setting in PhotoScan to use the memory better? I'm really not familiar with PC hardware, so I'm drawing a blank on what I should do to make the program run faster.

42
General / Export Orthomosaic as JPG - No location data?
« on: May 19, 2017, 05:29:30 PM »
I am using PhotoScan 1.2.

When I export an orthomosaic as TIFF (any compression), I am able to load the resulting files into a GIS with location data perfectly.

However, if I try with JPG, there is no location data associated. I also see there is no location data in the exifdata for the JPG either.

I can use the "create KML" option to get a kml for each jpg image which I can then use to locate the JPG, but I really want to have the data included with the JPG as soon as I export.

Is this possible? What am I missing here?

I want to use JPG format because the file size is so much smaller and still looks great compared to even the JPG-compressed TIFF files.
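
For reference, this is roughly the pair of export calls I'm comparing (a sketch; the 1.2-era parameter names are partly my assumption from the export dialog). The write_world option writes a sidecar world file next to the JPG, which a GIS can use for positioning, but that still isn't data embedded in the JPG itself:

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk

# GeoTIFF export: location data is embedded and loads into a GIS directly
chunk.exportOrthomosaic("ortho.tif", image_format=PhotoScan.ImageFormatTIFF)

# JPG export: no embedded georeferencing; a sidecar world file can carry it instead
chunk.exportOrthomosaic("ortho.jpg",
                        image_format=PhotoScan.ImageFormatJPEG,
                        write_world=True)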
