Show Posts


Messages - user

1
Quote from: Yoann Courtois
Hello!

You should be able to calibrate your camera first by defining a high marker accuracy (0.01 pix) and a very low tie point accuracy (1000 pix).
Then re-inject that calibration (and fix it) in a normal optimization project (marker accuracy 0.1 pix / tie point accuracy 1 pix) to go further in the workflow!

Hello Yoann Courtois,

Yes, I'm aware of this solution, but I was not sure whether PhotoScan uses the variances to weight the observations as is done in weighted least squares. Thanks for pointing it out; it is indeed a quick fix.
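In Python API terms, that quick fix would look roughly like this. A sketch only: the attribute names follow the 1.4 API reference (marker_projection_accuracy, tiepoint_accuracy, sensor.fixed) and should be checked against your version.

Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Stage 1: calibrate almost exclusively from the markers. With weights
# w = 1 / sigma^2, a tiny marker sigma and a huge tie point sigma let
# the marker observations dominate the adjustment.
chunk.marker_projection_accuracy = 0.01   # pix
chunk.tiepoint_accuracy = 1000.0          # pix
chunk.optimizeCameras()

# Stage 2: freeze the calibration and continue with normal weights.
for sensor in chunk.sensors:
    sensor.fixed = True                   # keep the interior orientation constant
chunk.marker_projection_accuracy = 0.1    # pix
chunk.tiepoint_accuracy = 1.0             # pix
chunk.optimizeCameras()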

Still, in this case the optimization will use all points to build the normal equation matrix. By allowing several hundred or thousand points to be omitted, future versions could reduce the computation time.
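To illustrate why the point count matters: with weights w_i = 1/sigma_i^2, forming the weighted normal matrix N = A^T W A costs time proportional to the number of observations, so every omitted point saves work. A toy sketch with made-up sizes:

Code: [Select]
import numpy as np

m, n = 100_000, 12            # observations vs. unknowns (toy sizes)
A = np.random.rand(m, n)      # design matrix (Jacobian of the residuals)
sigma = np.full(m, 1.0)       # per-observation accuracies [pix]
w = 1.0 / sigma**2            # weighted-least-squares weights
N = A.T @ (w[:, None] * A)    # normal matrix N = A^T W A, cost O(m * n^2)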

Nevertheless, I guess a well-cleaned tie point cloud would allow you to fit a longer polynomial distortion model (using k3-k4 & p3-p4), whereas using only targets would give you a perhaps more accurate, but shorter, polynomial distortion model (only up to k2 & p2).

At least in industrial photogrammetry there is no need for many coefficients. In most cases the lenses have quite low distortion (unless it is a special lens like a fisheye, which needs special treatment anyway) and are well calibrated with 3 radial and 2 asymmetric coefficients. If necessary, one can take 2 affinity coefficients into account, and in some special cases 2 distance-dependent coefficients can play a role. Summing this up, even the worst case produces 12 unknowns (c, x0, y0, r1, r2, r3, b1, b2, c1, c2, d1, d2). Given a usual photogrammetric calibration board with roughly 1000 targets, each yielding two image observations, every image contributes about 2000 equations for those 12 unknowns, which is more than enough over-determination to reconstruct the distortion.
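For reference, a sketch of the correction terms behind those coefficient names, following the common Brown-style parameterization (the distance-dependent terms d1, d2 are omitted here; this is an illustration, not necessarily PhotoScan's exact model):

Code: [Select]
import numpy as np

def distortion(x, y, r1, r2, r3, b1, b2, c1, c2):
    """Correction at image coordinates (x, y) measured from the principal
    point: radial terms r1..r3, decentering ("asymmetric") terms b1, b2,
    and affinity/shear terms c1, c2."""
    rr = x * x + y * y                        # squared radial distance
    rad = r1 * rr + r2 * rr**2 + r3 * rr**3   # radial polynomial factor
    dx = x * rad + b1 * (rr + 2 * x * x) + 2 * b2 * x * y + c1 * x + c2 * y
    dy = y * rad + b2 * (rr + 2 * y * y) + 2 * b1 * x * y
    return dx, dy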

Apart from that, I'm not aware that fourth-order Taylor terms have brought any improvement when calibrated scale bars (and not the reprojection error) were used to judge the accuracy.

Greetings

2
General / Re: Support for image data with more than 8 bits
« on: January 23, 2018, 04:03:05 PM »
@chrisd

This is indeed very interesting information.
Thank you for sharing your experience on this.

I will try chunk.exportPoints and check whether the color columns turn out to carry more than 8 bits as well.
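A small sketch of that check, assuming the dense cloud was exported as an ASCII file with x y z r g b columns (file name hypothetical):

Code: [Select]
import numpy as np

pts = np.loadtxt("dense_cloud.txt")   # assumed columns: x y z r g b
colors = pts[:, 3:6]
print(colors.max())                   # anything above 255 means >8 bits per channel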

3
General / Re: Support for image data with more than 8 bits
« on: January 23, 2018, 02:58:31 PM »
@chrisd,

The issue with this is that many libraries already support importing images with more than 8 bits, but the photogrammetry software then simply truncates them to 8 bits after the import.
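For example, reading a 16-bit image with a library such as tifffile shows that the full depth is available right after import (file name hypothetical):

Code: [Select]
import tifffile

img = tifffile.imread("scan_16bit.tif")   # returns a numpy array
print(img.dtype)                          # e.g. uint16: the file carries >8 bits
print(int(img.max()))                     # values above 255 show the depth is used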

So my point was to clarify whether more than 8 bits are actually used in the later processing. It would be nice to have an official statement about this, since the question has been raised many times.

4
Industrial photogrammetry uses coded and non-coded circular targets to calibrate the camera and compute the orientation. Only afterwards is the dense cloud computed.

In the photogrammetric community it is well known that the detection accuracy of circular targets is superior to SIFT or equivalent features: targets can be located with an accuracy of roughly 1/100 px, while SIFT-like features reach only about 1/5 px.

Therefore it would be nice to have a feature where I could tell PhotoScan to use only coded and non-coded circular targets for the orientation and calibration process, and SIFT-like tie points only for generating the dense cloud.
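Until such an option exists, something close can be approximated with the current Python API: detect the circular targets, then down-weight the tie points during optimization so that the markers dominate. A sketch; the names follow the 1.4 API and should be checked against your version:

Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Detect coded circular targets in the aligned images.
chunk.detectMarkers(type=PhotoScan.TargetType.CircularTarget12bit, tolerance=50)

# Let the ~1/100 px markers dominate the adjustment; the SIFT-like
# tie points are down-weighted so heavily that they barely contribute.
chunk.marker_projection_accuracy = 0.01   # pix
chunk.tiepoint_accuracy = 1000.0          # pix
chunk.optimizeCameras()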

Thank you

5
General / Support for image data with more than 8 bits
« on: January 23, 2018, 01:15:59 PM »
Does PhotoScan provide internal support for images with more than 8 bits per pixel?

I mean: is the full bit depth actually used to produce the dense cloud?

It would be a benefit when the texture does not provide much contrast for tie point generation.

6
Python and Java API / Re: Load dense point cloud as numpy array
« on: December 19, 2017, 02:19:46 PM »
Thanks Alexey ;D, it would be nice to have this feature though. I've already implemented it that way, but even on SSD drives this method is quite slow when the point cloud is big :-\
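For reference, the workaround looks roughly like this. A sketch only: the exportPoints keyword arguments follow the 1.4 API and an ASCII x y z r g b layout is assumed.

Code: [Select]
import numpy as np
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Round-trip through disk: export the dense cloud, then read it back.
chunk.exportPoints("dense_cloud.txt", binary=False, colors=True)

# Column-oriented numpy array with x, y, z, r, g, b.
points = np.loadtxt("dense_cloud.txt")[:, :6]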

7
Python and Java API / Load dense point cloud as numpy array
« on: December 11, 2017, 05:44:30 PM »
I've created a dense point cloud.

Now I need to store it as a column-oriented numpy array of [x, y, z, r, g, b].

How do I achieve this?

8
Maybe I haven't explained this clearly enough.

I have circular targets (markers), both coded and non-coded.

All of them are printed on paper, and the paper has a certain thickness.

I've reconstructed the object of interest, and the targets are clearly visible in the model.

I want the coordinates of the object itself, so I have to subtract the paper thickness at each position where a paper marker sits.

In order to achieve this, I need the surface normal.

A less accurate version would use the surface normal of the object at a given 3D marker coordinate (see the sketch at the end of this post).

It would be even more precise if this normal were computed from the ellipse geometry in the images in combination with the epipolar geometry.

My questions are:

- Is there something already implemented that I can reuse? The surface normal for a given point? The surface normal for a target?

Thank you.
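In the meantime, here is a minimal sketch of the less accurate variant mentioned above: take the normal of the mesh face nearest to each marker and push the marker inward by the paper thickness. The attribute names follow the 1.4 API, the thickness value is a placeholder, and the normal sign depends on the face winding:

Code: [Select]
import numpy as np
import PhotoScan

chunk = PhotoScan.app.document.chunk
model = chunk.model
THICKNESS = 0.0002   # paper thickness in model units (placeholder)

verts = np.array([[v.coord.x, v.coord.y, v.coord.z] for v in model.vertices])
faces = np.array([list(f.vertices) for f in model.faces])

tri = verts[faces]                # (F, 3, 3) triangle corner coordinates
centroids = tri.mean(axis=1)      # face centroids
normals = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

for marker in chunk.markers:
    if marker.position is None:   # skip markers without a 3D position
        continue
    p = np.array([marker.position.x, marker.position.y, marker.position.z])
    nearest = np.argmin(((centroids - p) ** 2).sum(axis=1))
    # Assumes the face normals point away from the object surface.
    corrected = p - THICKNESS * normals[nearest]
    print(marker.label, corrected)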


9
Dear Agisoft Team,

We need the surface normals at coded and non-coded circular targets. Is there any option that is already implemented and ready to use?

Thanks
