
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - John_H

1
Python and Java API / buildUV: adaptive_resolution
« on: October 18, 2023, 01:24:47 PM »
This was asked four years ago, but got no replies, so I thought I'd try again...

What does "adaptive_resolution(bool) – Enable adaptive face detalization"

mean/do in the buildUV process?
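
For reference, this is the sort of call I mean. A minimal sketch assuming the standard Metashape Python module; the mapping mode is just an example choice, not something tied to this flag:

Code:
import Metashape

doc = Metashape.app.document
chunk = doc.chunk

# Build UVs with the flag in question enabled; everything else is
# left at (assumed) defaults
chunk.buildUV(mapping_mode=Metashape.GenericMapping,
              page_count=1,
              adaptive_resolution=True)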

Thanks

John

2
Hi,

I have a model with approx. 3 million triangles and four 8K textures. When I export the model as binary glTF (.glb), the file size is 140MB.

If I export the same model in the same format but without the textures, the size is 347MB. This is the same whether I export with the 'export texture' option unchecked or first remove the textures from the model and then export it.

The same thing happens with other models as well.

FYI, the same model exported as OBJ with all files then zipped is 110MB with textures, 55MB without.
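
In case it helps to reproduce, the export can also be scripted like this. A sketch based on my reading of the Python API docs, so treat the flag names as assumptions:

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Binary glTF with textures: ~140MB for this model
chunk.exportModel(path="model_with_tex.glb",
                  format=Metashape.ModelFormatGLTF,
                  binary=True,
                  save_texture=True)

# Same model with texture export disabled: comes out at ~347MB
chunk.exportModel(path="model_no_tex.glb",
                  format=Metashape.ModelFormatGLTF,
                  binary=True,
                  save_texture=False)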

Thanks,
John

3
General / Some questions regarding combining laser scans and photogrammetry
« on: February 02, 2023, 01:40:50 PM »
Hi, I've searched the forums and online, but can't find relevant answers to a few questions. Some refer specifically (I think) to using scans from a Leica BLK and Cyclone Register; some are more general.

1. I've exported scans from Cyclone Register as E57, with the setting to include panoramic images as separate files (if I don't use this option, no images are present at all), and with and without E57 compatibility mode. When I import the scans into Metashape 2.0*, they appear with cube-map images (with depth maps), i.e. 6 individual images from the 6 orthogonal directions. Also, whilst the images appear in the correct place in relation to the point cloud, the actual scanner position, which in the documentation appears as a blue sphere, is missing.

2. In terms of processing, do people downsample their scans? Our individual scans have approx. 60 million points, and we may be using 10 scans or more in processing. In my current project, I'm using approx. 1700 images and 3 scans. I did the alignment at low accuracy and it took about 20 mins, but at high accuracy it was around 70% complete after 72 hours, with the remaining time showing 24 hours and increasing! Even when I downsampled the scans to 1/64th, i.e. around 900k points each (see the sketch at the end of this post), the processing time at high accuracy was still prohibitive. (And the downsampling seemed to destroy the depth maps?) We have a powerful PC with 128GB of RAM, an i9 processor and a 16GB GPU, and processing the project with only the images at high accuracy takes just a few hours.

3. With the alignment done at low accuracy, I then generated a dense point cloud, but it wasn't well aligned with the laser scans at all, with an offset of tens of centimetres at one end of the cloud.

Just processing the model with the photos has been very successful, but I was hoping that combining it with the laser scans would improve accuracy and provide scale; at the moment it seems more trouble than it's worth... Is this a problem with the BLK export/import?

Sorry for the long post!

*Also, when I tried importing the scans into 1.8, the images didn't appear at all, which is an issue as I will have to go back to that version when my 2.0 trial expires.
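
Re question 2: for anyone wanting to try the same 1/64th downsample outside Metashape, it can be scripted along these lines. A sketch using Open3D, which assumes the scans have first been converted to PLY (Open3D doesn't read E57 directly); the file names are placeholders:

Code:
import open3d as o3d

RATIO = 1.0 / 64.0  # keep 1 in 64 points (~60 million -> ~900k per scan)

for name in ["scan01", "scan02", "scan03"]:  # placeholder file names
    pcd = o3d.io.read_point_cloud(f"{name}.ply")
    down = pcd.random_down_sample(RATIO)
    o3d.io.write_point_cloud(f"{name}_down.ply", down)

Note that a plain point-cloud downsample like this discards the structured-scan information (including any panoramic/depth images), which might be why the depth maps disappeared.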

4
General / Invalid License file when upgrading to 2.0
« on: January 25, 2023, 01:08:56 PM »
Hi,

I have just upgraded to 2.0, but on starting I get the 'no license found' message. When I try to activate manually, I can find the existing license file (C:\Program Files\Agisoft\Metashape Pro\Agisoft_Lizenzserver.lic), but I get an 'invalid license file' error.

We have 25 (academic) network licenses.

Any ideas?

John

Edit: OK, I've just seen that there's an issue with floating licenses. It would have been nice to have known this before installing; something should be added to the download page. I have activated a trial license in the meantime.

5
General / GLB export loses normal map?
« on: November 03, 2022, 01:44:39 PM »
Hi,

I've created a model that needs to be exported as a GLB. It has a texture and a normal map, but when I export as a GLB and upload to Sketchfab, the normal map seems to have disappeared (switching the normal map on and off has no effect). If I export the same object as an OBJ, the normal map works fine...

Is this a bug, or a feature?! As far as I know, the GLB format does support normal maps...
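
One way to check whether the normal map is actually inside the exported file, rather than being dropped at Sketchfab's end. A quick sketch assuming the third-party pygltflib package:

Code:
from pygltflib import GLTF2

gltf = GLTF2().load("model.glb")

# A GLB with a working normal map should reference it from a material
for i, mat in enumerate(gltf.materials or []):
    print(i, "normalTexture:", mat.normalTexture)

If normalTexture prints as None for every material, the map never made it into the GLB in the first place.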

Thanks,
John

6
Bug Reports / Image loading extremely slow
« on: June 09, 2022, 12:05:31 PM »
Hi,

I am working on a project using 16-bit TIFFs. When I double-click on an image to place a marker, at first it loads instantly, but the process becomes progressively slower, to the point where it is taking over a minute (up to 4-5 mins!) to go from the low-res pixellated thumbnail to the full-res version. The problem seems intermittent; if I leave it for a few minutes, the process speeds up again before gradually slowing.
The images are all stored locally on an SSD. Metashape 1.8.1, build 13915.

Thanks,

John

7
General / Question about camera optimisation and RMS error statistics
« on: March 10, 2022, 01:36:11 PM »
Hi,

I'm using my normal workflow, which is to align the cameras, then improve the alignment using gradual selection. I will select points using
Gradual selection -> image count
Gradual Selection -> Reconstruction uncertainty
Gradual Selection -> Projection accuracy
and finally
Gradual Selection -> Reprojection error

At each stage I delete a proportion of the 'worst' points, ensuring I keep the projections for each image above 150, and then optimise cameras.
At each stage I also check the info to see how the process is affecting the RMS and max reprojection errors. The latest project I ran had the following stats after the 1st, 3rd and final steps:

After step 1:
RMS reprojection error   0.190378 (0.648795 pix)
Max reprojection error   0.571035 (25.5418 pix)

After step 3:
RMS reprojection error   0.195382 (0.470916 pix)
Max reprojection error   0.745797 (3.44066 pix)

After the final step:
RMS reprojection error   0.134116 (0.318879 pix)
Max reprojection error   0.441291 (1.39499 pix)

My first question is: what do the two numbers mean? I assumed they were the same value in different units, but the dimensionless number goes up while the pixel value goes down. What, then, is the first number, and which is the 'important' value (I assume it's the pixel one)?

Second, and slightly related: for the optimise cameras options I have been ticking only
f, cx, cy, k1, k2, k3, p1, p2
for the first three steps, and all of the options for the final step. But the options in the newest version of Metashape have changed, and I wondered if anyone had thoughts on which to use now (especially the three new advanced options)?

Also, any thoughts and comments on the overall workflow would be greatly appreciated...
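
For reference, the scripted equivalent of the workflow above, as I understand the 2.x Python API (1.x used Metashape.PointCloud.Filter instead of Metashape.TiePoints.Filter). A sketch only: the thresholds here are placeholders, chosen in practice so that each image keeps more than 150 projections:

Code:
import Metashape

chunk = Metashape.app.document.chunk

def remove_worst(criterion, threshold):
    # Filter the tie points on one gradual-selection criterion and
    # remove those beyond the (placeholder) threshold
    f = Metashape.TiePoints.Filter()
    f.init(chunk, criterion=criterion)
    f.removePoints(threshold)

steps = [(Metashape.TiePoints.Filter.ImageCount, 2),
         (Metashape.TiePoints.Filter.ReconstructionUncertainty, 10),
         (Metashape.TiePoints.Filter.ProjectionAccuracy, 3)]

for criterion, threshold in steps:
    remove_worst(criterion, threshold)
    # First three steps: only f, cx, cy, k1-k3, p1, p2 ticked
    chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                          fit_k1=True, fit_k2=True, fit_k3=True,
                          fit_p1=True, fit_p2=True)

# Final step: reprojection error, then optimise with all parameters ticked
remove_worst(Metashape.TiePoints.Filter.ReprojectionError, 0.3)
chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                      fit_b1=True, fit_b2=True,
                      fit_k1=True, fit_k2=True, fit_k3=True, fit_k4=True,
                      fit_p1=True, fit_p2=True)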


8
General / Sharpening filters - yes or no?
« on: October 18, 2019, 02:37:19 PM »
I was wondering if there is any general consensus (or theory) on the use of sharpening filters as part of a pre-processing workflow. Normally I wouldn't do anything apart from colour calibration, but I was wondering if anyone has experience of, or advice on, applying sharpening filters (e.g. in Lightroom) to their raw images before aligning?

I've searched the literature, and a lot of it is too technical, but a mild amount of sharpening (not enough to introduce extra noise) seems to be a part of many people's workflows. I'm nervous about doing anything that might alter the raw image, however.

I am generally talking about close-range photography using macro lenses, though I do some room-scale projects as well. Metric accuracy of the final model is the key requirement.
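
To be concrete about 'mild', the sort of pass I have in mind is along these lines. A sketch using Pillow's unsharp mask, applied to a developed TIFF/JPEG rather than the raw file itself; the file name and numbers are illustrative, not a recommendation:

Code:
from PIL import Image, ImageFilter

img = Image.open("IMG_0001.jpg")  # placeholder file name

# Gentle unsharp mask: small radius, low percent, and a threshold
# so flat (noise-prone) areas are left alone
sharpened = img.filter(ImageFilter.UnsharpMask(radius=1.5,
                                               percent=50,
                                               threshold=3))
sharpened.save("IMG_0001_sharp.jpg")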

Thanks,

John

9
General / Auto detect not recognising coded targets
« on: May 28, 2019, 01:13:01 PM »
Attached is an image from a project which uses a scale bar featuring 12-bit coded targets.

The targets differ from the ones that come with Agisoft in that they don't have a black circle at the centre, but we've used these scale bars before and Agisoft had no problem auto-detecting the targets.

I'm guessing the problem may be that the targets are too big in the image (in the previous project they were far smaller), which would be annoying as the project requires close-range, detailed images. However, the target's white central circle appears to be under the 30-pixel diameter (25-29ish? see the second attached image) that seems to be the upper limit.

I've tried running the auto-detect at every tolerance, inverted and non-inverted, but with zero results. Detecting non-coded circles is partially successful; some of the targets are recognised in some images, along with full stops on the scale bars and even bokeh circles from the out-of-focus background!

If the problem is the size, I'm guessing the solution would be to take an additional set of photos from further away, with different camera settings, and then combine the two camera networks?

Any other ideas?
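
For completeness, the tolerance sweep can be scripted rather than clicked through. A sketch; the parameter names are per the current Python API docs, and older builds may differ slightly:

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Sweep tolerance, normal and inverted, for 12-bit circular targets
for inverted in (False, True):
    for tolerance in range(0, 101, 10):
        chunk.remove(chunk.markers)  # clear the previous attempt
        chunk.detectMarkers(target_type=Metashape.CircularTarget12bit,
                            tolerance=tolerance,
                            inverted=inverted)
        print(inverted, tolerance, "markers found:", len(chunk.markers))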
