Python and Java API / How does the API determine region size?
« on: March 29, 2021, 09:24:01 PM »
I have a Python script that looks like this (I'm running Metashape 1.6.5):
1. Create Metashape.Document()
2. Load photos, import reference
3. Align photos
4. Build mesh
5. Build orthomosaic
I've left some steps out for clarity, but this summarizes what I'm doing. Up until now this has always worked well for me. Recently, though, I noticed that the region, and the corresponding mesh it generates, is smaller than my area. Sometimes it cuts off just one or two photos at the edge, sometimes almost half of my data. I can go in and manually resize the region, but I don't understand where in the code it is deciding to use such a small region.
My code iterates through several chunks like this. Other than when building my orthomosaic, I never explicitly set any region bounds.
1. Create Metashape.Document()
2. Load photos, import reference:
Code:
curChunk = doc.addChunk()
curChunk.addPhotos(newCameras, strip_extensions=False)
curChunk.crs = Metashape.CoordinateSystem("EPSG::4326")
curChunk.importReference(locRefFile, Metashape.ReferenceFormatCSV, columns=myColumns, delimiter=', ', skip_rows=1)
curChunk.camera_location_accuracy = myAccuracyVector
curChunk.camera_rotation_accuracy = myHeadingAccuracyVector
3. Align photos
Code:
curChunk.matchPhotos(downscale=1, generic_preselection=False)
curChunk.alignCameras(adaptive_fitting=True)
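(As far as I can tell, this alignment step is where the region first gets set automatically around the sparse cloud. The workaround I've been considering, sketched below, is to refit the region and then grow it right after alignment; the 1.5 scale factor is just my guess, not anything from the docs.)

Code:
# Workaround sketch: refit the region to the point cloud, then grow it
# so photos near the edges aren't clipped out of the mesh.
curChunk.resetRegion()            # refit the region to the current point cloud
region = curChunk.region
region.size = region.size * 1.5   # enlarge the box; the factor is a guess
curChunk.region = region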
4. Build mesh
Code:
curChunk.buildModel(surface_type=Metashape.HeightField,
                    interpolation=Metashape.EnabledInterpolation,
                    face_count=Metashape.MediumFaceCount,
                    source_data=Metashape.DataSource.PointCloudData,
                    vertex_colors=False)
5. Build orthomosaic
Code:
curChunk.buildOrthomosaic(surface_data=Metashape.ModelData,
                          blending_mode=Metashape.MosaicBlending,
                          region=Metashape.BBox(Metashape.Vector([b0, b1]),
                                                Metashape.Vector([b2, b3])),
                          projection=myUTMProjection)
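To try to see where the small region is coming from, I've also been printing out what the API decided versus where my cameras actually ended up. This is just a debugging sketch (the coordinate handling is my best understanding of the API, so corrections welcome):

Code:
# Debugging sketch: dump the auto-computed region and the aligned camera
# positions so I can compare them in the same coordinate system.
T = curChunk.transform.matrix
region = curChunk.region
print("region center:", curChunk.crs.project(T.mulp(region.center)))
print("region size (internal units):", region.size)
for cam in curChunk.cameras:
    if cam.transform:  # skip cameras that failed to align
        print(cam.label, curChunk.crs.project(T.mulp(cam.center)))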