Show Posts


Messages - Phogi

Pages: [1] 2 3 ... 7
1
Hi Alexey and Paulo,

I figured out what happens, and Metashape does export GDA2020 correctly. The reason I see the point cloud shifted in Global Mapper is that Global Mapper, by default, treats GDA2020 as needing an NTv2 datum transform. In Metashape's metadata it is TOWGS84(0,0,0,0,0,0,0), but when Global Mapper loads the WGS84/UTM output it applies the datum transform anyway!

So this can be ignored now; it's not a bug. Thanks, mates.

Cheers,

2
Thank you Paulo. Yes, I think Metashape sets the TOWGS84 datum parameters in the chunk correctly: GDA2020 is effectively the new "WGS84", so the transformation should be all zeros. But the point cloud export step seems to use the old logic that treats WGS84 as GDA94, so it adds the 7-parameter datum transform there. I'm not sure in which version this bug first appeared, but every version I tried from v1.7.4 to v1.8.2 has it. I hope this can be fixed as soon as possible.

3
Hi Alexey,

I'm using Metashape v1.8, and I found what looks like a bug when exporting a dense point cloud (LAZ, for example) from GDA2020 / MGA to WGS84.

Metashape's WKT for GDA2020 is:
'GEOGCS["GDA2020",DATUM["Geocentric Datum of Australia 2020",SPHEROID["GRS 1980",6378137,298.257222101,AUTHORITY["EPSG","7019"]],TOWGS84[0,0,0,0,0,0,0],AUTHORITY["EPSG","1168"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.01745329251994328,AUTHORITY["EPSG","9102"]],AUTHORITY["EPSG","7844"]]'

GDA2020 should be the same as WGS84, which makes sense for modern applications, and the conversion inside Metashape does behave that way. But on export, the WGS84 or WGS84 / UTM zone 55 file has a shift compared to GDA2020, and it does not match GDA94 either.
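To illustrate the difference, here is a minimal sketch of a position-vector 7-parameter (Helmert) transform on geocentric coordinates: with TOWGS84[0,0,0,0,0,0,0] it is an exact identity, while non-zero GDA94-style parameters shift a point near Australia by a noticeable amount. The non-zero values below are made-up, purely illustrative numbers, not the official GDA94 parameters.

```python
import numpy as np

def helmert(xyz, tx, ty, tz, rx, ry, rz, s_ppm):
    """Position-vector 7-parameter Helmert transform (small-angle form).

    xyz        : geocentric coordinates in metres
    tx, ty, tz : translations in metres
    rx, ry, rz : rotations in arc-seconds
    s_ppm      : scale change in parts per million
    """
    arcsec = np.pi / (180.0 * 3600.0)          # arc-seconds -> radians
    rx, ry, rz = rx * arcsec, ry * arcsec, rz * arcsec
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    return np.array([tx, ty, tz]) + (1.0 + s_ppm * 1e-6) * (R @ xyz)

p = np.array([-4052052.0, 4212836.0, -2545105.0])  # a point near Australia

# TOWGS84[0,0,0,0,0,0,0] (GDA2020-style): exact identity, no shift at all
print(helmert(p, 0, 0, 0, 0, 0, 0, 0) - p)         # -> [0. 0. 0.]

# Made-up non-zero parameters: even sub-arcsecond rotations move the
# point by decimetres, the kind of offset I saw in Global Mapper
print(helmert(p, 0.06, -0.01, -0.04, 0.04, 0.03, 0.03, -0.01) - p)
```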

Please have a look, thank you!!!

Best,

4
Hi Paul,

Thank you, that makes sense. Does that mean it is the projection matrix, then? From there it could be used to draw the view matches, since it is related to the essential matrix. I am also looking at a weird case where the GCP marking RMSE is small enough but the DEM sits lower than the GCPs; that is probably a separate question, though.

Much appreciated for your kind help, Paul, legend!

Best,
Phogi

5
Hi Paul,

Thank you for the explanations. I have another question about the camera transformation matrix. If I know a photo's precise geolocation, that should correspond to the translation part of the camera matrix relative to the world coordinate system. How is that represented in Metashape's coordinate system? It seems CTinv * Tinv should map world to internal coordinates, but it does not obviously link to the photo's geolocation. Should I treat camera.center as the internal-coordinate equivalent of the geocentric position from the image GPS?
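For reference, the recipe usually posted on this forum for the camera geolocation is chunk.crs.project(chunk.transform.matrix.mulp(camera.center)): as far as I understand, camera.center is the camera position in the chunk's internal coordinates, chunk.transform.matrix maps internal to geocentric, and crs.project converts geocentric to the chunk CRS. A numpy sketch of the matrix part (mulp is just a homogeneous multiply; all numbers here are illustrative stand-ins, not from a real project):

```python
import numpy as np

# Stand-in for chunk.transform.matrix: a 4x4 similarity (scale +
# translation here) mapping internal chunk coordinates -> geocentric.
T = np.eye(4)
T[:3, :3] = 2.0 * np.eye(3)                     # uniform scale of 2
T[:3, 3] = [-4052052.0, 4212836.0, -2545105.0]  # arbitrary offset

def mulp(M, p):
    """Apply a 4x4 matrix to a 3D point (like Metashape's Matrix.mulp)."""
    ph = M @ np.append(p, 1.0)
    return ph[:3] / ph[3]

center = np.array([1.0, 2.0, 3.0])   # stand-in for camera.center (internal)
geocentric = mulp(T, center)         # internal -> geocentric, in metres

# Going the other way (geocentric -> internal) is just the inverse matrix:
assert np.allclose(mulp(np.linalg.inv(T), geocentric), center)
```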

Thank you!

Best,
Phogi

6
Hi Paul,

Wow, those are fantastic examples!! Thank you, it is the most detailed example I have ever seen! Huge thanks!

 :) ;) :D

Best,
Phogi

7
Hi Paul,

Thank you so much! I actually found that the marker position I used was wrong. In the GUI, GCP002 is the first marker, but in chunk.markers it is the third, so I was reading another marker's position; that is why it was off by a lot!

May I ask if you know how we can get the projection matrix? camera.project() can project world coordinates into image coordinates, and from my understanding marker.position should hold the (u, v, w) coordinates, but it cannot be converted directly from homogeneous coordinates by dividing by w, so I am confused about which internal coordinate system marker.position actually refers to.
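In case it helps, a projection matrix can be assembled by hand in the usual pinhole form P = K [R | t] from the calibration and the inverse camera pose, ignoring lens distortion; the homogeneous division by w then happens after applying P, not on marker.position itself (which, as far as I can tell, is a plain 3D point in the chunk's internal coordinates). A generic numpy sketch with made-up numbers:

```python
import numpy as np

# Intrinsics, Metashape-1.x style: (cx, cy) are offsets from the image
# centre (illustrative values, not a real calibration).
f, cx, cy = 2400.0, 5.0, -3.0
w, h = 4000, 3000
K = np.array([[f,   0.0, w / 2 + cx],
              [0.0, f,   h / 2 + cy],
              [0.0, 0.0, 1.0]])

# Extrinsics, world -> camera (the inverse of the camera.transform pose):
R = np.eye(3)                      # camera axis-aligned, looking down +Z
t = np.array([0.0, 0.0, 10.0])     # point plane 10 units in front

P = K @ np.hstack([R, t[:, None]])  # 3x4 projection matrix

X = np.array([0.5, -0.25, 0.0, 1.0])  # homogeneous world point
uvw = P @ X
u, v = uvw[:2] / uvw[2]            # divide by w AFTER applying P
print(u, v)                        # -> 2125.0 1437.0 (origin top-left)
```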

Thanks a lot!

Phogi

8
Hi Paul,

Thank you! That explains the difference very clearly. But the image pixels do not look like they are counted from the top-left (see the attached image); they look more like they start from the bottom-left?

Could you explain a bit more which coordinate systems marker.position and the result of camera.transform.inv().mulp(marker.position) are in? My understanding is that chunk.transform.matrix.inv().mulp(chunk.crs.unproject(marker.reference.location)) transforms the marker's world coordinates to the camera coordinate system, but when I tried using x/z to calculate the pixels I got a different result, so I am not sure what procedure camera.project actually performs...

Thank you!
Phogi

9
Hi everyone,

I searched the forum and found there are two ways to reproject 3D points into 2D image pixels: one via sensor.calibration, the other via camera.project. What is the difference between them? Also, how are the pixels counted: from the top-left corner, or shifted from the centre by cx/cy?

Could anyone help with how camera.project works? I want to check why some markers have higher pixel errors even though they align fine, and whether there is anything abnormal in the camera matrix or projection matrix. How can I retrieve this information?
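From what I have pieced together since asking: camera.project appears to return pixels counted from the top-left corner, with the (cx, cy) principal-point offset applied relative to the image centre (1.x calibration convention), while sensor.calibration.project works from a point already in camera coordinates. Ignoring distortion, my current understanding of the final step reduces to this sketch (all numbers illustrative):

```python
def pinhole_pixel(x, y, z, f, cx, cy, width, height):
    """Project a point in camera coordinates (z forward) to pixels.

    Origin is the top-left corner; (cx, cy) shift the principal point
    away from the image centre. Distortion terms are omitted.
    """
    u = width / 2.0 + cx + f * x / z
    v = height / 2.0 + cy + f * y / z
    return u, v

# A point straight down the optical axis lands on the principal point,
# i.e. the image centre plus the (cx, cy) offset:
print(pinhole_pixel(0.0, 0.0, 5.0, 2400.0, 5.0, -3.0, 4000, 3000))
# -> (2005.0, 1497.0)
```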

Thanks in advance!


10
Hello Alexey,

Yes, that might be the issue. I found I had duplicated the chunk and saw this in the second chunk's meta! Is this info accumulated across the chunks?

Thanks!

11
Hi Alexey,

I am looking at chunk.point_cloud.meta and see the key point limit is 80,000 while in the GUI I specified 40,000; the MatchPhotos/tiepoint_limit is also doubled when read from the API. Is that an expected result?

My other question: could you share whether the camera's projection matrix and point_cloud.cov are related to each other?

I'm on v1.6.0 build 9617, by the way.

Much appreciated for your help, thank you!

Best,

12
Hi Jose,

Thanks for your kind suggestions. I was actually trying to speed up the high-density point cloud reconstruction time, but your suggestions offer a good new path for merging the data.

I did try the mask from mesh, and interestingly it takes longer to process. Running high density directly gives 63 minutes of total dense reconstruction time, while with the mask for the AOI it takes 72 minutes; I think this is due to the cropping step, and the whole surface still gets created.

So for now, I guess the only way to speed things up is still a more powerful machine.

Thanks a lot!
Best

13
Hi Jose,

Thank you very much for your kind suggestion. I tried the workflow and it does create a mask on the image! However, it then crops the whole point cloud to the selected area only. What I want to achieve is for the selected part to be reconstructed at high density while the rest is still processed at, for example, medium density, so that the areas I am not interested in still get processed but require fewer resources.

Would that be possible in Metashape?

Best,

14
General / Is there a way to put mask from sparse point cloud to image?
« on: October 06, 2020, 02:24:53 AM »
Hi Alexey,

Is there a way to draw a mask for a selected area from the sparse point cloud / ortho onto the photos, rather than drawing the same mask on each image? What I want to do is create a point cloud with very high density in some AOI; is that possible? Or is it possible to create a point cloud with high density in one part and lower density in the uninteresting parts, to speed things up?

Thank you!

15
Bug Reports / Re: Possibly Wrong conversion in EPSG 5513
« on: August 28, 2020, 12:33:35 PM »
Thank you Alexey, it seems there is some issue with GDAL or epsg.io. Much appreciated for the details!!
