Messages - John_H

1
Python and Java API / buildUV: adaptive_resolution
« on: October 18, 2023, 01:24:47 PM »
This was asked four years ago, but got no replies, so I thought I'd try again...

What does "adaptive_resolution(bool) – Enable adaptive face detalization" (quoted from the API reference) mean/do in the buildUV process?
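
For context, this is how the parameter would be used in a script. A minimal sketch only - the mapping_mode value and the surrounding setup are illustrative assumptions, not part of the question:

Code:
import Metashape  # Agisoft Metashape Pro Python module

doc = Metashape.app.document
chunk = doc.chunk

# Build UVs with the flag in question enabled.
# adaptive_resolution is the parameter quoted from the API reference;
# generic mapping is just an illustrative choice here (the enum name
# varies between the 1.x and 2.x releases).
chunk.buildUV(mapping_mode=Metashape.GenericMapping,
              adaptive_resolution=True)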

Thanks

John

2
Ok, thanks Alexey - good to know I wasn't going crazy!

But just to confirm - this bug only occurs when exporting a model without textures; exporting the same model with multiple texture files works fine.

3
Hi Alexey,

I've attached a link to the psz file.

I had a look at the glb files in Notepad, and the headers look OK, but the file without textures is 4 million lines long, compared to 700k for the file with textures.

John

https://drive.google.com/file/d/1jXDKYfJErrV_nG33tLkpwKNylOXLN9Et/view?usp=sharing

4
Hi Alexey,

I've done some experimenting, and it appears the issue only occurs when there are multiple texture files. If you check the attached file (filesizes.png), you can see exports for two models. I've processed both models with a 1x8k texture and with 4x8k textures, and exported each with and without texture.

The models with 1 texture behave as expected: the model exported without texture is smaller than the model exported with it. However, both models with 4 textures are considerably larger when exported without textures than with!

Also, the 4-texture model exported with no texture is roughly four times the size of the 1-texture model exported with no texture, despite the fact that they should be exactly the same model!

I'm pretty sure that this doesn't happen when exporting the same models as OBJs (i.e., the total size of the textured model is larger than the untextured one, as expected).

None of this makes sense to me! 

I've also attached a sample log from the console of an export with and without texture, and a screenshot of the options (for the non-textured export the options are the same, except that I untick the export texture option).

Thanks,

John

5
Hi,

I have a model with approx. 3 million triangles and 4x8k textures. When I export the model as Binary glTF (.glb), the file size is 140MB.

If I export the same model in the same format, but without the textures, the size is 347MB. This is the same whether I export the model with the 'export texture' option unchecked, or first remove the textures from the model and then export it.

The same thing happens with other models as well.

FYI, the same model exported as OBJ and then all files zipped is 110MB with texture, 55MB without.
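
In case it helps to reproduce, the export is equivalent to something like this in the Python API (a hedged sketch - the format enum and flag names are my reading of the 1.8 API reference, and the paths are placeholders):

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Export as binary glTF with textures...
chunk.exportModel(path="model_with_tex.glb",
                  format=Metashape.ModelFormatGLTF,
                  save_texture=True)

# ...and the same model without textures (the file that
# unexpectedly comes out larger).
chunk.exportModel(path="model_no_tex.glb",
                  format=Metashape.ModelFormatGLTF,
                  save_texture=False)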

Thanks,
John

6
Hi Alexey,

Thanks for the reply - unfortunately the license situation is a bit more complex and we won't be able to install the new license server utility, so we will have to go back to 1.8 for now. (Is it the case that projects saved in 2.0 can't be opened in earlier versions?)

Did you have any ideas about my laser scan problem?
I know Leica scanners and E57s exported from Leica Cyclone have been used successfully with Metashape (https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLVI-2-W1-2022/105/2022/), so I don't know if it's something specific to the BLK scanner, which (I think) has its own version of Cyclone Register. I can share a copy of the laser scan data if that would help.

I've attached a screenshot showing the project as-is; in the model view I've highlighted the cube map images from the scanner.

Thanks,
John

7
Thanks Kiesel.

No, it's not aerial photogrammetry, so I was expecting a panoramic image.

And the alignment did work on low settings, but at anything higher the alignment time was prohibitive, and even when aligned, the resulting dense point cloud was misaligned with the laser scans.

Unfortunately, I use a floating license, and we will not be able to update our license server when the new version is released.

John


8
Sorry, it was the BLK 360

9
General / Some questions regarding combining laser scans and photogrammetry
« on: February 02, 2023, 01:40:50 PM »
Hi, I've searched the forums and online, but can't find relevant answers to a couple of questions. Some refer specifically (I think) to using scans from a Leica BLK and Cyclone Register; some are more general.

1. I've exported scans from Cyclone Register as E57, with the setting to include panoramic images as separate files (if I don't use this option, no images are present at all), and with and without E57 compatibility mode. When I import the scans into Metashape 2.0*, the scans appear with cube map images (with depth maps), i.e. 6 individual images from the 6 orthogonal directions. Also, whilst the images appear in the correct place in relation to the point cloud, the actual scanner position, which in the documentation appears as a blue sphere, is missing.

2. In terms of processing, do people downsample their scans? Our individual scans have approx. 60 million points, and we may be using 10 scans or more in processing. In my current project, I'm using approx. 1700 images and 3 scans. I did the alignment process at low accuracy and it took about 20 mins, but at high it was around 70% complete after 72 hours, with the remaining time at 24 hours and increasing! Even when I downsampled the scans to 1/64th, i.e. around 900k points each, the processing time at high accuracy was still prohibitive (and the downsampling seemed to destroy the depth maps?). We have a powerful PC with 128GB of RAM, an i9 processor and a 16GB GPU, and processing the project with only the images at high accuracy takes just a few hours. (See the sketch after these questions for how the accuracy presets map to the Python API.)

3. With the alignment done at low accuracy, I then generated a dense point cloud, but it wasn't well aligned with the laser scans at all, with an offset of tens of centimetres at one end of the cloud.

Just processing the model with the photos has been very successful, but I was hoping that combining it with the laser scans would improve accuracy and provide scale; at the moment it seems more trouble than it's worth... Is this a problem with the BLK export/import?
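
(Regarding question 2 above: the GUI accuracy presets correspond to the downscale argument of matchPhotos in the Python API. A minimal sketch, assuming the 1.8/2.0 parameter names; the preset-to-value mapping is the usual one:)

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Accuracy presets map to downscale factors:
# Highest = 0, High = 1, Medium = 2, Low = 4, Lowest = 8
chunk.matchPhotos(downscale=1,                 # "High" accuracy
                  generic_preselection=True,
                  reference_preselection=True)
chunk.alignCameras()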

Sorry for the long post!

*Also, when I tried importing the scans into 1.8, the images didn't appear at all, which is an issue as I will have to go back to that version when my 2.0 trial expires.

10
General / Invalid License file when upgrading to 2.0
« on: January 25, 2023, 01:08:56 PM »
Hi,

I have just upgraded to 2.0, but on startup I get the 'no license found' message. When I try to activate manually, I point it at the existing license file (C:\Program Files\Agisoft\Metashape Pro\Agisoft_Lizenzserver.lic), but I get an 'invalid license file' error.

We have 25 (academic) network licenses.

Any ideas?

John

Edit: OK, I've just seen that there's an issue with floating licenses. It would have been nice to have known this before installing; something should be added to the download page. I have activated a trial license in the meantime.

11
General / GLB export loses normal map?
« on: November 03, 2022, 01:44:39 PM »
Hi,

I've created a model that needs to be exported as a GLB. It has a texture and a normal map, but when I export as a GLB and upload to Sketchfab, the normal map seems to have disappeared (toggling the normal map on and off in the viewer has no effect). If I export the same object as an OBJ, the normal map works fine...

Is this a bug, or a feature?! As far as I know, GLBs do support normal maps...

Thanks,
John

12
Bug Reports / Re: Image loading extremely slow
« on: June 09, 2022, 03:46:04 PM »
Thanks Alexey, I will try updating to 1.8.3 and check again.

FYI, there are 998 cameras in the chunk; I will experiment with fewer.

John

13
Bug Reports / Image loading extremely slow
« on: June 09, 2022, 12:05:31 PM »
Hi,

I am working on a project using 16-bit TIFFs. When I double-click on an image to place a marker, at first it loads instantly, but the process becomes progressively slower, to the point where it is taking over a minute (up to 4-5 mins!) to go from the low-res pixellated thumbnail to the full-res version. The problem seems intermittent; if I leave it for a few mins, the process speeds up again before gradually slowing.
The images are all stored locally on an SSD. Metashape 1.8.1, build 13915.

Thanks,

John

14
Thanks Paulo.

GSD is ground sample distance? I am doing close-range object photogrammetry, so the equivalent is just the size of each pixel on the object?
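
(For anyone reading later: that works out as the sensor pixel pitch projected onto the object. A quick sketch with illustrative numbers, not from my setup:)

Code:
# Close-range equivalent of GSD: the pixel footprint on the object.
# footprint = sensor pixel pitch * working distance / focal length
pixel_pitch_mm = 0.0044   # 4.4 um sensor pixel (illustrative)
focal_mm = 50.0           # lens focal length (illustrative)
distance_mm = 1000.0      # camera-to-object distance (illustrative)

footprint_mm = pixel_pitch_mm * distance_mm / focal_mm
print(f"Pixel footprint on object: {footprint_mm:.3f} mm")   # -> 0.088 mm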

So I suppose my question is: in order to get the most accurate camera network, which number should I be concerned with (i.e., which one do I want to make smallest)? In the past I've always assumed that reducing the RMS pixel value was most important, but from your reply the mean key point size seems the most relevant value?

Thanks again for your help!

John

15
General / Question about camera optimisation and RMS error statistics
« on: March 10, 2022, 01:36:11 PM »
Hi,

I'm using my normal workflow, which is to align the cameras, then improve the alignment using gradual selection. I will select points using
Gradual Selection -> Image count
Gradual Selection -> Reconstruction uncertainty
Gradual Selection -> Projection accuracy
and finally
Gradual Selection -> Reprojection error

At each stage I will delete a proportion of the 'worst' points, ensuring I keep the projections for each image above 150, and then optimise cameras.
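
(For reference, the scripted equivalent of one pass of this loop is something like the following - a minimal sketch using the 1.x Python API names (in 2.0, PointCloud became TiePoints), with an illustrative threshold rather than a recommendation:)

Code:
import Metashape

chunk = Metashape.app.document.chunk

# One gradual-selection pass: filter tie points by a criterion,
# remove the worst, then re-optimise the cameras.
f = Metashape.PointCloud.Filter()
f.init(chunk, criterion=Metashape.PointCloud.Filter.ReconstructionUncertainty)
f.removePoints(50)   # illustrative threshold

chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                      fit_k1=True, fit_k2=True, fit_k3=True,
                      fit_p1=True, fit_p2=True)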
At each stage I check the info to see how the process is affecting the RMS and Max reprojection errors. The latest project I ran had the following stats after the 1st, 3rd and final steps:

RMS reprojection error   0.190378 (0.648795 pix)
Max reprojection error   0.571035 (25.5418 pix)

RMS reprojection error   0.195382 (0.470916 pix)
Max reprojection error   0.745797 (3.44066 pix)

RMS reprojection error   0.134116 (0.318879 pix)
Max reprojection error   0.441291 (1.39499 pix)

My first question is: what do the two numbers mean? I assumed they were the same value, just in different units, but the dimensionless number can go up while the pixel value goes down. What, then, is the first number, and which is the 'important' value (I assume it's the pixel one)?

Second, and slightly related: for the optimise cameras options I have been ticking only
f, cx, cy, k1, k2, k3, p1, p2
for the first three steps, and all the options for the final step. But the options in the newest version of Metashape have changed, and I wondered if anyone had thoughts on which options to use now (especially the three new advanced options)?

Also, any thoughts and comments on the overall workflow would be greatly appreciated...

