
Recent Posts

21
Thanks Paulo,

I'm trying to build it up bit by bit so I understand.

I can access the cameras via chunk.cameras

But when I run the following I get:
   
    TypeError: argument 1 must be Metashape.Metashape.Vector, not None

Code: [Select]
for camera in chunk.cameras:
    error = chunk.transform.matrix.mulp(camera.center) - chunk.crs.unproject(camera.reference.location)
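For what it's worth, a minimal sketch of how that TypeError can be avoided, assuming it comes from cameras that were not aligned (camera.center is None) or that have no source reference (camera.reference.location is None); skipping those cameras lets the loop run:
Code: [Select]
for camera in chunk.cameras:
    # skip cameras that were not aligned or that have no source reference
    if camera.center is None or camera.reference.location is None:
        continue
    # estimated camera centre (geocentric) minus the source reference unprojected to geocentric
    error = chunk.transform.matrix.mulp(camera.center) - chunk.crs.unproject(camera.reference.location)
    print(camera.label, error.norm())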
22
Hi spatial digger,

look at the following script from the Agisoft GitHub: https://github.com/agisoft-llc/metashape-scripts/blob/master/src/save_estimated_reference.py. It will save to a text file, for each camera, its reference source location, estimated location, error and sigma, as well as its reference source rotation, estimated rotation, error and sigma,
as in the following (where XYZ is the location info and YPR the rotation info in yaw, pitch, roll format):
Code: [Select]
IMG_6256.JPG
       XYZ source: 350457.283156 2852834.941277 760.147899
       XYZ error: -0.501244 0.393616 0.685989
       XYZ estimated: 350456.782032 2852835.334798 760.833885
       XYZ sigma: 0.014008 0.014108 0.019027
       YPR source: 77.144 6.770 0.814
       YPR error: 5.127 1.539 -2.952
       YPR estimated: 82.271 8.309 -2.138
       YPR sigma: 0.001 0.004 0.004
IMG_6257.JPG
       XYZ source: 350505.734175 2852832.854006 759.132261
       XYZ error: 0.582248 -0.318101 1.874715
       XYZ estimated: 350506.316283 2852832.535981 761.006984
       XYZ sigma: 0.013379 0.013864 0.018723
       YPR source: 96.045 4.840 -7.320
       YPR error: -0.441 2.846 7.053
       YPR estimated: 95.604 7.686 -0.267
       YPR sigma: 0.001 0.004 0.005
...
This should give you a good start. Of course, this is a fairly elaborate script, developed to help understand how the different items are calculated.

If you just want to export the camera id, estimated location and rotation to a text file, just use chunk.exportReference(...) as in the attached screen copy; consult the API reference manual for details on the parameters used.
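For example, a minimal call could look like the sketch below; the output path is a placeholder, and the optional columns parameter (which selects the exported fields) is omitted here, since the column codes are listed in the API reference manual:
Code: [Select]
chunk.exportReference(path="camera_reference.csv",
                      format=Metashape.ReferenceFormatCSV,
                      items=Metashape.ReferenceItemsCameras,
                      delimiter=",")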
23
I have an external scripted workflow which runs the usual processes (adds photos, aligns, imports target coordinates, and continues through to the dense point cloud; code included for reference).

I would like to access each image/photo name and the calculated camera XYZ location, plus any other directional information that exists, so this can be brought into a pandas DataFrame.

Any idea where to start?

Code: [Select]
    doc = Metashape.Document()

    # set up a chunk
    chunk = doc.addChunk()

    # add photos
    chunk.addPhotos(filenames=photo_list)

    print(str(len(photo_list)) + " Added")

    # detect markers
    chunk.detectMarkers()
    # assign crs
    chunk.crs = Metashape.CoordinateSystem("EPSG::27700")

    count_photos = len(photo_list)

    # import target coords
    chunk.importReference(path=targets_path, format=Metashape.ReferenceFormatCSV,
                        columns='nxyz', delimiter=',',create_markers=True, skip_rows=1)

    chunk.matchPhotos(downscale=2, generic_preselection=True, reference_preselection=True, keypoint_limit=80000,
                      tiepoint_limit=3000)  # , filter_mask=False, mask_tiepoints=True, keypoint_limit=40000, tiepoint_limit=4000

    chunk.alignCameras(adaptive_fitting=True)
    chunk.buildDepthMaps(downscale=4)
    chunk.buildDenseCloud()
    chunk.buildModel(source_data=Metashape.DataSource.DenseCloudData, surface_type=Metashape.Arbitrary, interpolation=Metashape.EnabledInterpolation, face_count=Metashape.MediumFaceCount)
    doc.save(filepath)
    chunk = doc.chunk
    epsgCode = 27700
    localCRS = Metashape.CoordinateSystem("EPSG::" + str(epsgCode))
    proj = Metashape.OrthoProjection()
    proj.crs = localCRS
    chunk.buildOrthomosaic(surface_data=Metashape.DataSource.ModelData, blending_mode=Metashape.MosaicBlending, projection=proj)
    chunk.exportRaster(path=ortho_path, image_format=Metashape.ImageFormatJPEG, save_world=True, projection=proj)
    docu_path = os.path.join(docu_path, "report.pdf")
    chunk.exportReport(path=docu_path, title=job + selected_job)
    doc.save(filepath)
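As a starting point, here is a minimal sketch (assuming the cameras are aligned and the chunk is georeferenced) that collects each camera label and its estimated location in the chunk CRS into a pandas DataFrame; the rotation fields can be added following the save_estimated_reference.py script referenced in the reply above:
Code: [Select]
import pandas as pd

T = chunk.transform.matrix
rows = []
for camera in chunk.cameras:
    if camera.center is None:  # camera was not aligned
        continue
    est = chunk.crs.project(T.mulp(camera.center))  # estimated position in the chunk CRS
    rows.append({"label": camera.label, "x": est.x, "y": est.y, "z": est.z})

df = pd.DataFrame(rows)
print(df)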
24
General / Re: Defocus masks
« Last post by hairyfreak on July 29, 2021, 07:00:47 PM »
Nope, I take it back, there is something strange.

Every so often, and more often than I'd like, the mask is the exact opposite of what it should be.

I think my camera positions are good: the geometry is nice and clean, and all other things (placing markers, etc.) appear to work as expected. But not the generated masks.

Is this just a mistake in my camera alignment?
25
Hello

I'm currently researching the advantages and disadvantages of generating meshes from Depth Maps and Dense Clouds. My findings from trial and error are as follows.
Overall, the Depth Maps method is superior for generating meshes when a large enough set of photos is available. It creates a more detailed mesh with less noise and shorter processing time, and it skips the dense cloud generation step.
The Dense Cloud method is better when a smaller sample of images is available, but it creates a noisier, less detailed mesh.

I was wondering if there is detailed documentation or a use case describing when to use one method or the other; I couldn't find anything useful on my own.
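For anyone comparing the two routes in a script, here is a minimal sketch of both (the parameter values are just placeholders, not recommendations):
Code: [Select]
# mesh directly from the depth maps (skips the dense cloud step)
chunk.buildDepthMaps(downscale=2)
chunk.buildModel(source_data=Metashape.DataSource.DepthMapsData,
                 surface_type=Metashape.Arbitrary,
                 face_count=Metashape.MediumFaceCount)

# mesh from a dense cloud (requires the extra buildDenseCloud step)
chunk.buildDenseCloud()
chunk.buildModel(source_data=Metashape.DataSource.DenseCloudData,
                 surface_type=Metashape.Arbitrary,
                 face_count=Metashape.MediumFaceCount)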

26
General / Re: GPU Usage in Background processing mode?
« Last post by Alexey Pasumansky on July 29, 2021, 05:37:37 PM »
Hello MikePelton,

Do you observe a similar problem with other GPU-supported stages (image matching or depth maps generation, for example)?

The Build Mesh process uses the GPU only when the depth-maps-based mesh generation approach is selected, and only during certain sub-stages. Texture blending would be performed on the GPU if there is sufficient VRAM for the required operation.

Do you perhaps have the processing log related to the procedures in question?
27
Hello spatialdigger,

JPEG format doesn't support embedded georeferencing information. Instead, a separate .jgw world file is used to store the georeferencing parameters (pixel size and the coordinates of the top-left pixel). Unfortunately, this file contains only the coordinate values and not the definition of the coordinate system, so when importing a .jpg/.jgw orthomosaic into QGIS the coordinate system needs to be specified manually.

You may consider using TIFF format (with JPEG compression, if you need smaller file size) or JPEG2000 format.
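For example, a sketch of exporting a GeoTIFF with JPEG compression (assuming proj is the Metashape.OrthoProjection already used to build the orthomosaic; the output path and quality value are placeholders):
Code: [Select]
compression = Metashape.ImageCompression()
compression.tiff_compression = Metashape.ImageCompression.TiffCompressionJPEG
compression.jpeg_quality = 90  # assumed quality setting

chunk.exportRaster(path="orthomosaic.tif",
                   image_format=Metashape.ImageFormatTIFF,
                   image_compression=compression,
                   projection=proj)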
28
Python and Java API / Re: launch Metashape gui from python
« Last post by Alexey Pasumansky on July 29, 2021, 05:31:12 PM »
Hello spatialdigger,

I think you can do that using the os.system command or the subprocess module. Basically, you need to run a shell command that calls the Metashape executable with the path to the problematic project as an argument.
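For example, a minimal sketch using subprocess (the executable path and project path below are placeholders; adjust them for your installation):
Code: [Select]
import subprocess

# placeholder paths -- point these at your own installation and project
metashape_exe = r"C:\Program Files\Agisoft\Metashape Pro\metashape.exe"
project_path = r"C:\projects\problem_project.psx"

# opens the project in the Metashape GUI without blocking the calling script
subprocess.Popen([metashape_exe, project_path])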
29
General / Re: image stabilisation - yes or no
« Last post by Kiesel on July 29, 2021, 04:26:48 PM »
When I think about it, the wobbling can't be imitated reproducibly, but the opposite can.
So two test sequences, one with the camera on a tripod and stabilization ON and the other with stabilization OFF, should show its effect nicely.
Perhaps this test could be expanded with a third sequence, leaning the tripod in a different direction for each shot with stabilization OFF.
The camera calibration results for the sequences should show some differences.

30
Hello Paul,

You helped me again!! Thank you. :-)

Yes, I forgot to check the API reference guide.


Thank you again and best regards

William
