Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - wizprod

Pages: [1] 2 3 ... 6
1
General / Re: Insta360 double Fisheye DNG's
« on: December 26, 2023, 12:47:51 AM »
Thanks for the comments, much appreciated.

I will see if I can generate some non-sensitive, photogrammetry-suited sample data. In the meantime, you can find some DNGs here: https://drive.google.com/drive/folders/1TACxCBcZrJ2P-qQNLdbzphU_5G0CVjcB

I managed to get ChatGPT-4 to write me a Python script that splits the images into separate folders, so Metashape can easily import them as a multi-camera project. After figuring out the pixel pitch and focal length (a pain, with little to no hard spec on the sensor), I got a very decent result: some manual markers had 0.3 px precision, compared to 3 px precision when I was using the equirectangular images made with Insta360 Studio.
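A sketch of the kind of splitting script described here. The pairing rule is an assumption (that the two lenses' frames alternate when files are sorted by name); adjust it to the camera's actual file naming.

```python
# Sort dual-fisheye DNGs into two folders so Metashape can import them
# as a multi-camera project, one folder per lens.
# Assumption (hypothetical naming scheme): the two lenses' frames
# alternate when the files are sorted by name.
import shutil
from pathlib import Path

def split_fisheye_pairs(src_dir: str, out_dir: str) -> None:
    src = Path(src_dir)
    cam0 = Path(out_dir) / "cam0"
    cam1 = Path(out_dir) / "cam1"
    cam0.mkdir(parents=True, exist_ok=True)
    cam1.mkdir(parents=True, exist_ok=True)
    files = sorted(src.glob("*.dng"))
    # Even-indexed frames -> lens 0, odd-indexed -> lens 1
    for i, f in enumerate(files):
        dest = cam0 if i % 2 == 0 else cam1
        shutil.copy2(f, dest / f.name)
```

In Metashape, the two folders can then be loaded via Add Folder with the multi-camera-system layout option.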

So far, so good!

2
General / Re: Insta360 double Fisheye DNG's
« on: December 22, 2023, 12:45:46 PM »
Thanks!
Do you have an automated way to do this? This would be a pain to do for 500 shots :D

3
General / Insta360 double Fisheye DNG's
« on: December 19, 2023, 05:04:17 PM »
Hi,

Is there a way to import dual-fisheye DNGs from the Insta360 ONE RS 1-inch?

I can of course get it to stitch after converting to 2:1-ratio equirectangular images, but I'm hoping to get better image quality without the heavy image-processing step in Insta360 Studio.
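For reference, the geometry behind that conversion step: each equirectangular pixel maps to a view direction, which an ideal equidistant fisheye projects at radius r = f·θ from the image centre. This is a simplified sketch; a real lens needs its calibrated distortion model.

```python
# Map an equirectangular pixel to fisheye pixel coordinates, assuming an
# ideal equidistant fisheye (r = f * theta). Illustrative only.
import math

def equirect_to_fisheye(u, v, width, height, f, cx, cy):
    """u, v: pixel in the equirect image; width spans 360 deg of longitude,
    height spans 180 deg of latitude; f: fisheye focal length in pixels;
    cx, cy: fisheye image centre."""
    lon = (u / width) * 2.0 * math.pi - math.pi     # -pi .. pi
    lat = math.pi / 2.0 - (v / height) * math.pi    # pi/2 .. -pi/2
    # Unit view vector, optical axis along +z
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    theta = math.acos(max(-1.0, min(1.0, z)))       # angle off the axis
    r = f * theta                                   # equidistant projection
    phi = math.atan2(y, x)
    return cx + r * math.cos(phi), cy + r * math.sin(phi)
```

The equirect image's centre pixel lands exactly on the fisheye centre (θ = 0), which is a quick sanity check on the mapping.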

4
General / Re: Insta360 building scan
« on: December 02, 2023, 06:40:22 PM »
Why not just use the Fisheye preset in Camera Calibration? What happens if you try?

Additionally, Metashape works well with full 360° images; maybe that could make it easier for you?

5
Hi all,

I'd like to request that the Calibrate Reflectance feature in Metashape also support the Empirical Line Method using two or more grey patches.

I'm using the Mapir T3-R50 (https://www.mapir.camera/en-gb/products/diffuse-reflectance-standard-calibration-target-package-t3-r50), which has 4 patches.

More info on ELM here: https://www.mdpi.com/2072-4292/15/11/2909
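The ELM itself is a small computation: fit a linear relation between measured digital numbers and the panels' known reflectances, then apply it to the whole image. A minimal sketch; the patch reflectance values below are made-up illustration numbers, not the actual T3-R50 specifications.

```python
# Empirical Line Method sketch: fit reflectance = a*DN + b from two or
# more grey patches of known reflectance, then convert arbitrary DNs.

def fit_elm(dns, reflectances):
    """Least-squares line through (DN, reflectance) pairs; needs >= 2 patches."""
    n = len(dns)
    mean_x = sum(dns) / n
    mean_y = sum(reflectances) / n
    sxx = sum((x - mean_x) ** 2 for x in dns)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(dns, reflectances))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

def dn_to_reflectance(dn, a, b):
    """Apply the fitted empirical line to a single digital number."""
    return a * dn + b

# Hypothetical example: two patches measured at DN 1000 and 3000 with
# known reflectances 0.10 and 0.50.
a, b = fit_elm([1000.0, 3000.0], [0.10, 0.50])
```

With four patches, as on the T3-R50, the least-squares fit also gives a sanity check on linearity that a single-panel calibration cannot.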

6
Bug Reports / Re: RuntimeError: Can't decompress depth
« on: October 09, 2021, 06:43:58 PM »
I have gotten the same error in 1.7.4 on an almost brand-new system with 128 GB of RAM. I have run memtest, and there is nothing wrong with the memory.

Additionally, I am using normal JPEGs in my project, so why is this failing in OpenEXR?

7
Tried rerunning the blending, this time with only the 16 GB GPU selected in settings. Now it runs the blending fine on the GPU, but the overall process is measurably slower, as I am only using one GPU throughout.

8
Hi,

I am facing a problem when blending textures during Tiled Model creation.
I have two GPUs, one with 4 GB of VRAM and one with 16 GB.
Both GPUs are used in the earlier stages of the Tiled Model creation.

However, when Metashape reaches the Blending Textures stage, it reverts to CPU only, as the 4 GB GPU doesn't have enough VRAM (~5 GB needed).

What I don't understand is why it doesn't just continue on the 16 GB GPU only?

Right now it takes ages, even on an i9-11900K with 128 GB of RAM.

Code: [Select]
Found 3 GPUs in 0 sec (CUDA: 0 sec, OpenCL: 0 sec)
Using device: AMD Radeon (TM) R9 Fury Series (Fiji), 64 compute units, free memory: 4046/4096 MB, OpenCL 2.0
  driver version: 3276.6 (GSL), platform version: OpenCL 2.1 AMD-APP (3276.6)
  max work group size 256
  max work item sizes [1024, 1024, 1024]
  max mem alloc size 3264 MB
  wavefront width 64
Using device: AMD Radeon VII (gfx906), 60 compute units, free memory: 16304/16368 MB, OpenCL 2.0
  driver version: 3276.6 (PAL,HSAIL), platform version: OpenCL 2.1 AMD-APP (3276.6)
  max work group size 256
  max work item sizes [1024, 1024, 1024]
  max mem alloc size 13695 MB
  wavefront width 64
loaded selector in 0 sec
selected 807 cameras in 5.378 sec
selected 152 blocks in 0.053 sec
loaded model blocks in 0.246 sec
loaded uv blocks in 0.165 sec
initializing renderer... done in 0.208 sec
Blending textures...
All cameras are fine
Activating context...
calculating mesh connectivity... done in 0.88 sec
rendering 0 to 149 pages
Initialized texture renderer
Configuring pipeline...
Initialized memory broker
Configuring mosaic pipeline with outliers filtering
Constructed pipeline
Relaxed precision enabled
Collecting memory requests...
Allocating memory requests...
Estimated required video memory: 5570 MB
Estimated device memory: total 3840 MB, used 40 MB, available 3243 MB
Cannot use GPU. Reason: VK Error : VkResult is "ERROR_OUT_OF_DEVICE_MEMORY" at line 12
Performing blending on CPU...
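Metashape exposes GPU selection as a bitmask (`Metashape.app.gpu_mask` in the Python API, matching the checkboxes in GUI preferences), so one workaround is to mask out the 4 GB card for the blending step. A small helper for building that mask; the device index assignment below is an assumption about this particular system:

```python
# Build the GPU-selection bitmask Metashape expects: bit i set enables
# device index i.

def gpu_mask(device_indices):
    """Return a bitmask with bit i set for each enabled device index i."""
    mask = 0
    for i in device_indices:
        mask |= 1 << i
    return mask

# Hypothetical usage inside Metashape's Python console, assuming the
# Radeon VII is device index 1:
# import Metashape
# Metashape.app.gpu_mask = gpu_mask([1])  # enable only the 16 GB card
```

This is per-process rather than per-stage, so it trades the two-GPU speedup in earlier stages for GPU blending, as noted in the follow-up post above.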

9
Bug Reports / Re: Vertical Datum out of range 1.7.1
« on: April 03, 2021, 01:26:36 AM »
I am also having this problem a lot! It usually happens at some point after converting Cameras to another datum, and is especially prone to happen if Markers and Cameras are not converted at the same time, or if you use Update from EXIF, resulting in a datum mismatch between Markers and Cameras.

And the real mess is that there seems to be no remedy other than redoing the project from scratch :(

Also occurring in 1.7.2.

10
Updated to 20.11.2.

Alignment works with both GPUs.
Making the mesh from depth maps does not. Same error message.


11
Any progress on this one?
Is there more troubleshooting I can do to help?

12
This also fails now, after messing around with enabling the different GPUs...

13
Thanks for the reply. Further testing revealed that this only happens when the Radeon VII is included in the processing.

Radeon Software Version:
20.9.1 (recommended driver from AMD)

OS Version:
Windows 10 2004

The attached log first shows the R9 Nano, which processes fine, and then the Radeon VII, which fails.

14
Hi,

I have a reproducible problem whenever I use depth maps for any processing (DEM, mesh, ortho, etc.).

I get the attached error: ciErrNum: CL_OUT_OF_HOST_MEMORY (-6) at line 207

I have two GPUs:
AMD Radeon R9 Nano with 4 GB of VRAM
AMD Radeon VII with 16 GB of VRAM


System has 32 GB of RAM.

Any hints on what goes wrong and how to fix it?

15
General / Re: Camera Position Accuracy Setting For PPK + GCP
« on: May 02, 2020, 10:54:43 AM »
My photos are geotagged using PPK; however, I did not have accurate coordinates for the position of the base station. So relative camera position accuracy is high, but absolute world-coordinate position accuracy is low, i.e. typical GPS (3 m horizontal, 10-20 m vertical).

I also have several GCPs (I was unable to set the base station on one of them) which I have accurate coordinates for.

Theoretically I would think the PPK-generated camera positions would improve relative accuracy of the ground surface between GCPs, while the GCPs would correct the absolute positional accuracy.

Your GNSS base position and coordinates need to be the same for your GCPs and for the PPK of the cameras; otherwise you will have a constant inconsistency between the camera positions and the GCP positions.
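Because an erroneous base position shifts every PPK solution by the same amount, the inconsistency is a rigid translation. If the base station's true position is later surveyed, the offset can be applied to all camera positions before adjustment. A sketch under the simplifying assumption of a local Cartesian frame (for geographic coordinates the shift would be done in ECEF or a projected CRS):

```python
# Apply the base-station correction (surveyed - assumed) as a constant
# offset to every PPK camera position. Local Cartesian frame assumed.

def shift_camera_positions(cameras, assumed_base, surveyed_base):
    """cameras: list of (x, y, z) tuples; bases: (x, y, z) tuples."""
    dx = surveyed_base[0] - assumed_base[0]
    dy = surveyed_base[1] - assumed_base[1]
    dz = surveyed_base[2] - assumed_base[2]
    return [(x + dx, y + dy, z + dz) for x, y, z in cameras]
```

After the shift, camera reference accuracy can be set to the PPK level and the GCPs used to constrain the absolute solution, instead of letting the two fight each other.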
