

Messages - ppant

1
I processed the data in Metashape and saved the point cloud in the correct UTM zone, then used Entwine to generate the 3D tiles. When loaded in the viewer, the point cloud is floating in the air, and I am not sure how to apply the height offset inside Metashape. From what I have read, it looks like Metashape just uses the altitude from the geotags, with no further correction applied when I export the data to the correct UTM zone. Is there a way to correct this and bring the data down to the ground, ideally via a script?
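What I am hoping for is something like the sketch below: shift every camera's reference altitude by a constant vertical offset and re-fit the transform. The offset value and the whole approach are assumptions on my part, not something I have confirmed:
Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk
height_offset = 30.0  # assumed constant offset in metres (e.g. local geoid undulation)

# subtract the offset from every camera's reference altitude
for camera in chunk.cameras:
    if camera.reference.location is not None:
        loc = camera.reference.location
        camera.reference.location = Metashape.Vector([loc.x, loc.y, loc.z - height_offset])

# re-fit the chunk transform to the corrected reference data
chunk.updateTransform()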

Thanks 


2
Python and Java API / RGB depth data processing
« on: April 05, 2022, 06:19:47 PM »

Hey, I have a question. I am trying to process images collected with an iPhone together with their depth data. Based on the information provided in this link https://agisoft.freshdesk.com/support/solutions/articles/31000162212-smart-cameras-with-depth-sensor-data-processing I load the depth images and color images into the project using this method:
Code: [Select]
Metashape.app.document.chunk.importLaserScans
I am not seeing any change in the images; I only observed that the color images lose their geotags in the processed images. I thought that was OK and continued the normal workflow to align the images and generate the dense point cloud. My results are bad, and I am not sure what I am missing. They are bad enough that the point cloud comes out better without using the depth information at all.
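For reference, the normal workflow I continue with after the import is roughly the sketch below; the parameter values are assumptions, not my exact settings:
Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# align the images
chunk.matchPhotos(downscale=1, generic_preselection=True, reference_preselection=True)
chunk.alignCameras()

# build the dense point cloud from depth maps
chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)
chunk.buildDenseCloud()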

P

3
Alexey,
I observed the problem in all datasets. Basically, I only posted the section of code that exports the band image. Before exporting the vegetation band image I export the orthomosaic and the LAZ file, and that works fine.
But when I add this section of code to export the vegetation band, it just gives me an "unsupported tif format" error. Not sure what is causing that.

4
Alexey,
Oh, I am using Metashape version 1.6.6 build 11715. Is there a bug in 1.6.6? I am not sure. Since I am running several scripts with 1.6.6, it is not easy for me to update our system directly to 1.7.3 without knowing that everything still works. Alexey, is it possible to get a demo license?
Thanks 

5
I am trying to export the vegetation index band from a Python script and ran into this issue: RuntimeError: Unsupported format: .tif

Code: [Select]

vegetation_image = 'D:/orthomosaic_rdvi.tif'

# configure the RDVI formula on the raster transform and enable it
chunk.raster_transform.formula = ["(B10-B6)/sqrt(B10+B6)"]
chunk.raster_transform.calibrateRange()
chunk.raster_transform.enabled = True

# export the transformed orthomosaic as a TIFF
chunk.exportRaster(vegetation_image,
                   image_format=Metashape.ImageFormatTIFF,
                   raster_transform=Metashape.RasterTransformValue,
                   save_alpha=False)


Thanks

6
Python and Java API / Coded markers and local coordinates
« on: December 10, 2020, 10:03:53 PM »
I am mosaicking 360° images of an object. I placed 12-bit coded markers on the ground. I wrote a Python script routine to load the images, detect markers, add scale bars, align the images, and generate a point cloud model, and that works fine. Now I have a question: I want to set the coded marker 'target_1' as my (0,0,0) coordinate. If possible, how can I adjust the coordinate system so that target 1 becomes the (0,0,0) point in the point cloud?
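What I have in mind is roughly the sketch below; the marker label and the idea of translating chunk.transform.matrix so that the marker maps to the world origin are my own assumptions:
Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# find the marker to use as the origin (label is an assumption)
marker = next(m for m in chunk.markers if m.label == "target 1")

# marker.position is in chunk-internal coordinates; map it to world coordinates
M = chunk.transform.matrix
world_origin = M.mulp(marker.position)

# translate the chunk transform so that this point becomes (0, 0, 0)
offset = Metashape.Vector([-world_origin.x, -world_origin.y, -world_origin.z])
chunk.transform.matrix = Metashape.Matrix.Translation(offset) * M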

7
Bug Reports / Re: Inconsistent image mosaic for 360 pictures
« on: November 02, 2020, 04:40:19 PM »
Alexey,
I have sent you the two PSZ project files.

Thanks

8
Bug Reports / Inconsistent image mosaic for 360 pictures
« on: October 30, 2020, 08:03:33 PM »
Hello, I am trying to mosaic 360° images of a plant and get a dense point cloud. For that, I take a video going around the plant and extract frames from it. On the enclosed wall in the background I have added 12-bit coded targets and a checker pattern. With the help of the 12-bit coded targets I also want to add scale to the point cloud. My results differ depending on which of these two sequences of steps I follow.

First process
1. Load video frames
2. Detect markers (and add scale bars)
3. Align images
4. Generate point cloud and export.
In this case I see an artifact in the point cloud result, but I can measure the distance between the coded markers and it is correct. See the attached image showing the artifact at the trunk area.

Second process
1. Load video frames
2. Align images
3. Detect markers (and add scale bars)
4. Generate point cloud and export.
The images are aligned correctly (see the attached good image) and there is no artifact at the trunk area. But the scale bars are not set correctly; the distances between the coded markers are all over the place.

It looks like there may be a bug in how the marker scale bars are applied in the second process.
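For reference, here is a condensed sketch of the two orderings I am comparing; only the position of the marker detection and scale-bar step relative to alignCameras changes (the parameter values, the 0.1765 m distance, and the output path are assumptions, not my exact script):
Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

def add_scalebars(chunk, distance=0.1765):
    # add a scale bar between consecutive coded markers (distance is an assumption)
    markers = list(chunk.markers)
    for m1, m2 in zip(markers, markers[1:]):
        scalebar = chunk.addScalebar(m1, m2)
        scalebar.reference.distance = distance

# first process: detect markers and add scale bars before alignment
chunk.detectMarkers(Metashape.TargetType.CircularTarget12bit, 50)
add_scalebars(chunk)

chunk.matchPhotos(downscale=1)
chunk.alignCameras()

# second process: detectMarkers() and add_scalebars() would instead run here,
# after alignCameras()

chunk.buildDepthMaps(downscale=4)
chunk.buildDenseCloud()
chunk.exportPoints("point_cloud.laz", format=Metashape.PointsFormatLAZ)  # hypothetical path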

Thanks

9
Alexey,
Something strange is happening when the coded markers are detected while running the routine to mosaic the 360° pictures (550 pictures). When I detect the coded markers and then align the images, the dense point cloud generation gives me a memory error (bad allocation). But if I run the same process without marker detection, the dense point cloud is generated without the memory error. That, however, leads to another problem: I cannot add the scale bars to the exported point clouds.
I am using Metashape version 1.6.5.
Thanks

10
Hello
I am trying to get a correct scale in the exported dense point cloud. In my imaging setup, I go around the object with two cameras and collect videos. Although multiple cameras are used, I think it is better to put the frames from both cameras together and align them. I follow two approaches for the image mosaic, but I am seeing something strange. First, I follow these steps: load images, detect markers, add scale bars, align images, generate a point cloud. In this case, the images are not aligned correctly.

Second, I follow these steps: load images, align images, detect markers, add scale bars, and generate the point cloud. Here the images are aligned, but the scale bars are wrong. Not sure what is happening.



Thanks

11
In the GUI I can select the region and crop the dense point cloud. Is there a way to crop the dense point cloud with a Python script, such that the markers placed at the four corners of the object define the region to crop to?
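Something like the sketch below is what I am after; fitting chunk.region to the bounding box of the marker positions is my assumption about how this could work, not a confirmed approach:
Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# marker positions are in chunk-internal coordinates, the same space as chunk.region
positions = [m.position for m in chunk.markers if m.position is not None]

min_corner = Metashape.Vector([min(p.x for p in positions),
                               min(p.y for p in positions),
                               min(p.z for p in positions)])
max_corner = Metashape.Vector([max(p.x for p in positions),
                               max(p.y for p in positions),
                               max(p.z for p in positions)])

# axis-aligned region covering the corner markers
region = chunk.region
region.center = (min_corner + max_corner) * 0.5
region.size = max_corner - min_corner   # note: may need extra Z padding if the markers are coplanar
region.rot = Metashape.Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
chunk.region = region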

Thanks

12
Hello
I am trying to align turntable pictures. I placed markers (12-bit circular targets), and this code detects the markers:
Code: [Select]
chunk.detectMarkers(Metashape.TargetType.CircularTarget12bit, 50)

When the scale bars between the markers are hard-coded, my 360° image alignments come out as expected:
Code: [Select]
scalebars = chunk.addScalebar(chunk.markers[0], chunk.markers[1])
scalebars.reference.accuracy = accuracy
scalebars.reference.distance = 0.1765

scalebars = chunk.addScalebar(chunk.markers[1], chunk.markers[2])
scalebars.reference.accuracy = accuracy
scalebars.reference.distance = 0.1765

scalebars = chunk.addScalebar(chunk.markers[2], chunk.markers[3])
scalebars.reference.accuracy = accuracy
scalebars.reference.distance = 0.1765


Instead of hard-coding, I tried to make this addition a bit more dynamic, as below, but then my image alignments go all over the place. Not sure why this is happening or what I am doing wrong.
Code: [Select]

accuracy = 0.0001
pairings = ['1_2', '2_3', '3_4']
scale_values = {'1_2': 0.1765, '2_3': 0.1765, '3_4': 0.1765}

# map the marker number (parsed from its label) to the marker object
markers_name = {}
for marker in chunk.markers:
    markers_name.update({marker.label.replace('target ', ''): marker})

print(markers_name)
for pair in pairings:
    a, b = pair.split('_')
    if (a in markers_name.keys()) and b in markers_name.keys():
        scalebars = chunk.addScalebar(chunk.markers[int(a) - 1], chunk.markers[int(b) - 1])
        scalebars.reference.accuracy = accuracy
        scalebars.reference.distance = scale_values[pair]
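For comparison, a variant that looks the markers up by their parsed label instead of their position in chunk.markers (assuming the labels follow the 'target N' pattern) would look like this; I am not sure yet whether that is the actual cause of the problem:
Code: [Select]
for pair in pairings:
    a, b = pair.split('_')
    if a in markers_name and b in markers_name:
        # use the label-keyed dictionary so the order of chunk.markers does not matter
        scalebars = chunk.addScalebar(markers_name[a], markers_name[b])
        scalebars.reference.accuracy = accuracy
        scalebars.reference.distance = scale_values[pair]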

Thanks

13
Alexey,
In the script, you can see all steps are applied.

I load the images, read the masks, and load the camera information. (After these steps I can see that all the pre-calibrated camera parameters and the mask images are loaded.) I also set filter_mask = True. The mosaic is working.

However, I further notice that in Metashape the mask (the choice of how the image mask is prepared) affects how the images are aligned. With some masks the images are not aligned, with some there is partial alignment, and with some full alignment. I am not sure what is going on. In the mask images, I set all object pixels to 254 and the background to 0.
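For reference, the mask-import and alignment steps I am running are roughly the sketch below; the mask path template and the parameter values are placeholders, not my exact script:
Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# load per-image masks from files (path template is a placeholder)
chunk.importMasks(path="D:/masks/{filename}_mask.png",
                  source=Metashape.MaskSourceFile,
                  operation=Metashape.MaskOperationReplacement)

# align using the pre-calibrated cameras, excluding masked areas from matching
chunk.matchPhotos(downscale=1, filter_mask=True)
chunk.alignCameras()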


Thanks

14
Alexey
I checked the camera calibration, and my lens calibration information (pre-calibrated parameters) is loaded before matchPhotos. One thing: my mask is a rectangular shape (generated for all images), and I notice that altering that rectangular mask changes my results. Also, when I make the mask cover only the object, there are too few tie points and alignment fails; that is why I created the rectangular mask. Is there a relationship such that the object should cover a certain percentage of the image to get a good result?

Thanks

15
I am not sure where the problem is. A further addition to the problem: depending on the mask, my results seem random.

Thanks
