
Show Posts



Topics - willfig

1
Bug Reports / 3D mouse controller
« on: November 19, 2020, 01:01:47 AM »
I'm having a lot of trouble using my 3Dconnexion SpaceMouse since I upgraded to version 1.6.  There are two issues:
1) When I try to rotate the model it seems to zoom at the same time.  This never used to happen.  I see that in v1.6.5 there is an option to select the specific mouse.  I've tried this and I think it does solve the problem, but the setting seems to revert: I leave the dialog and manipulate the model and all seems OK, but as soon as I reset the center of rotation by double-clicking I have the problem again.  When I open the navigation dialog I see the mouse is no longer selected and it's back to the default.
2) Previously zoom control was by pulling the mouse towards you or pushing it away from you.  This is how the 3Dconnexion training software operates as well.  After the update, zoom control is via pulling the mouse up or pushing it down (which previously was Y-translation).  Although there are options to invert axes, to fix this issue there needs to be an option to swap Z translation with Z rotation (I think).

Any help here?  Note that I posted about this a few times in the General section but never got a response.

2
General / 3D Mouse control changed in v1.6
« on: July 30, 2020, 04:02:44 AM »
Since upgrading to version 1.6.x of Metashape Pro I've noticed the behaviour of the 3D mouse is altered (using a 3Dconnexion SpaceMouse).  To get the mouse to work as desired (and similar to its control in the 3Dconnexion software) we had to invert all the axes for translation and rotation in the navigation preferences.  This allowed movement of the model that matched hand movement of the mouse.  For translation this meant pulling up and pushing down moved the model up and down on the screen.  To zoom you would push and pull on the mouse.  Since upgrading, these two axes seem to be switched and I can't find any option to go back to this more natural setup.  Does anybody know how this can be done?  I don't think it's possible to fix this just by inverting any of the axes.

3
Bug Reports / Exported camera position data is not scaled
« on: June 19, 2019, 06:44:16 AM »
This was originally posted to the Python and Java API board but is probably more appropriate here.

I'm working with some custom software that provides photo overlay tools on dense clouds.  When I import the dense cloud and camera position data from an unscaled model, it works fine.  However, when I import the same data from a scaled model, the camera positions are not correct.  We use the script below to export the .xml camera position file as well as a file that contains the transform and x, y, z position data for every camera.  I've looked these over and it looks like the position data exported from the scaled model is exactly the same as that pulled from an unscaled version of the same model.  I'm wondering if anybody knows why this is and how I can get position data for the cameras in the scaled coordinate system.

----------------------------------------------------------------------------------------------------------------------------------------
import PhotoScan
import math
import os
import json

doc = PhotoScan.app.document
chunk = doc.chunk # FIXME this is just the active chunk
cams = chunk.cameras

proj_path = doc.path
proj_dir, proj_name = os.path.split(proj_path)
proj_name = proj_name[:-4]

outputs = {}

cams_filename = proj_dir + '/' + proj_name + '.cams.xml'
meta_filename = proj_dir + '/' + proj_name + '.meta.json'

meta_file = open(meta_filename, 'w')

chunk.exportCameras(cams_filename)

for cam in cams:
    key = cam.key
    path = cam.photo.path
    center = cam.center
    if center is not None:
        geo = chunk.transform.matrix.mulp(center)  # camera center in world (scaled) coordinates
        if chunk.crs is not None:
            lla = list(chunk.crs.project(geo))  # geographic coordinates (currently unused)
        center = list(center)  # note: this keeps the internal, unscaled coordinates
   
    agi_trans = cam.transform
    trans = None
    if agi_trans is not None:
        trans = [list(agi_trans.row(n)) for n in range(agi_trans.size[1])]
   
    outputs[key] = {'path' : path, 'center' : center, 'transform' : trans}
   
   
print(outputs)
meta_file.write(json.dumps({'cameras' : outputs}, indent=4))

meta_file.close()
------------------------------------------------------------------------------------------------------------------------------------------
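For reference, here's a minimal sketch of the change I think might give scaled positions, assuming chunk.transform.matrix maps the internal chunk coordinates into the scaled/world frame (which is what the mulp() call in the script above already computes as 'geo').  I haven't confirmed this is the intended approach:

----------------------------------------------------------------------------------------------------------------------------------------
import PhotoScan

chunk = PhotoScan.app.document.chunk
T = chunk.transform.matrix  # internal -> scaled/world coordinates

for cam in chunk.cameras:
    if cam.center is None:  # skip cameras that were not aligned
        continue
    world = T.mulp(cam.center)  # camera position in the scaled coordinate system
    print(cam.label, list(world))
------------------------------------------------------------------------------------------------------------------------------------------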

4
Python and Java API / Exported cameras not scaled
« on: June 17, 2019, 03:55:05 PM »
I'm working with some custom software that provides photo overlay tools on dense clouds.  When I import the dense cloud and camera position data from an unscaled model, it works fine.  However, when I import the same data from a scaled model, the camera positions are not correct.  We use the script below to export the .xml camera position file as well as a file that contains the transform and x, y, z position data for every camera.  I've looked these over and it looks like the position data exported from the scaled model is exactly the same as that pulled from an unscaled version of the same model.  I'm wondering if anybody knows why this is and how I can get position data for the cameras in the scaled coordinate system.

----------------------------------------------------------------------------------------------------------------------------------------
import PhotoScan
import math
import os
import json

doc = PhotoScan.app.document
chunk = doc.chunk # FIXME this is just the active chunk
cams = chunk.cameras

proj_path = doc.path
proj_dir, proj_name = os.path.split(proj_path)
proj_name = proj_name[:-4]

outputs = {}

cams_filename = proj_dir + '/' + proj_name + '.cams.xml'
meta_filename = proj_dir + '/' + proj_name + '.meta.json'

meta_file = open(meta_filename, 'w')

chunk.exportCameras(cams_filename)

for cam in cams:
    key = cam.key
    path = cam.photo.path
    center = cam.center
    if center is not None:
        geo = chunk.transform.matrix.mulp(center)  # camera center in world (scaled) coordinates
        if chunk.crs is not None:
            lla = list(chunk.crs.project(geo))  # geographic coordinates (currently unused)
        center = list(center)  # note: this keeps the internal, unscaled coordinates
   
    agi_trans = cam.transform
    trans = None
    if agi_trans is not None:
        trans = [list(agi_trans.row(n)) for n in range(agi_trans.size[1])]
   
    outputs[key] = {'path' : path, 'center' : center, 'transform' : trans}
   
   
print(outputs)
meta_file.write(json.dumps({'cameras' : outputs}, indent=4))

meta_file.close()
------------------------------------------------------------------------------------------------------------------------------------------

5
Bug Reports / Metashape 1.5.1 won't load old Batch Process files
« on: March 06, 2019, 04:28:32 AM »
Hi

I've just upgraded our network system to Metashape 1.5.1 and have noticed that we can't load any of our old batch process files (which worked with PhotoScan Pro 1.4.x).  Is this a known issue?  Is there a solution?

Will

6
General / Dealing with network jobs that won't cancel
« on: March 01, 2019, 07:13:47 AM »
From time to time we get a job on our network that seems to just hang.  There are no errors, and other jobs continue to process.  We can pause these jobs and try to abort them from the Network Monitor, but they won't cancel.  If you open the project file directly, you get the network processing dialog but can't cancel from there either.  I know one way to cancel the job is to restart the server, but that means losing all the jobs that are queued.  I'm just wondering if there is another way?  Thanks.

Will

7
General / Identification and removal of redundant photos
« on: February 13, 2019, 06:35:17 AM »
Is there any way to identify cameras (photos) that may be redundant so they can be eliminated prior to building a dense cloud?

We build models of corals to evaluate changes over time.  In some situations (usually low light or poor underwater visibility) we have to collect a very high number of photos to be sure we come back from the field with enough to build models.  After we cull the blurry ones we are often left with a very high number (600+) for a single coral.  Even on our processing network some of these take two to four days for just one coral.  I think this is because of the very high overlap in all these photos.

So I'm looking for some automated tools to reduce the number of photos.  Options (a rough sketch of the first two is below):
1. Randomly disable some percentage.  Not ideal, but it could work.  Is there a tool or existing Python script to do this?
2. Disable every Nth photo.  Again, is there a tool or script for this?
3. Based on the sparse alignment, use camera position and angle to determine which photos are redundant.  Ideally it would either rank them and then allow you to select some number to highlight and disable, or you could tell it how many you want to keep and it would highlight the most redundant ones to disable.  I suspect no such tool exists, but I wonder if anything could be done here with Python scripting?
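For what it's worth, here's a rough sketch of how I imagine options 1 and 2 might look as a script, assuming that setting Camera.enabled to False is enough to exclude a photo from the dense cloud build (I haven't verified that assumption):

----------------------------------------------------------------------------------------------------------------------------------------
import random
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Option 1: randomly disable a fraction of the aligned cameras
keep_fraction = 0.5  # placeholder value, adjust as needed
for cam in chunk.cameras:
    if cam.transform is not None and random.random() > keep_fraction:
        cam.enabled = False

# Option 2: disable every Nth camera instead
# n = 3
# for i, cam in enumerate(chunk.cameras):
#     if i % n == 0:
#         cam.enabled = False
------------------------------------------------------------------------------------------------------------------------------------------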

Thanks for the help.

Will

8
Bug Reports / Time estimates for network jobs not correct
« on: May 31, 2017, 06:33:32 PM »
I've been noticing that when I run a job on our network and then look at the Show Info dialog for the chunk, the times for each step are wrong, in some cases very wrong.  For instance, I just ran a batch process that I know took 4-5 hours to finish.  I can confirm this by looking at the details for the job in the Network Monitor: it says the Dense stage took a bit over 3 hours in total.  However, the Show Info dialog says depth map generation took 2 hours 21 minutes and dense cloud generation took 11 hours 44 minutes.  The alignment numbers are also too large.  Any idea what's going on here?

9
General / Estimating surface area for subsets of multiple aligned models
« on: September 10, 2015, 01:40:57 AM »
I have an application where I want to compare the surface area of the mesh from multiple models created independently of the same scene, and I'd like to do this for multiple sub-regions within the bounds of the models.  The scene is about 5 x 12 meters and ideally I'd like to just grid this up into 1 m x 1 m boxes, get the surface area within each box for each independent model, and then compare them.  I've aligned the meshes from each scene.  I don't think there is a way to overlay a grid like I've described, but I'm wondering if there are options to get this information by iterating over all triangles in the mesh and summing areas into spatial bins based on position within the mesh (from the coordinates of the vertices).  I have some experience with Python scripting; can this be done that way from within PhotoScan?  Or is there a way to export the data on the vertices (x, y, z positions for each one) in a text format I can read into another scripting environment (VB, Matlab...) to do the summary?  Thanks for your help.
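In case it clarifies what I'm after, here's a rough sketch of the binning idea in Python, assuming the mesh is accessible as chunk.model.faces and chunk.model.vertices and that chunk.transform.matrix maps internal coordinates into the scaled/world frame (I haven't tested this):

----------------------------------------------------------------------------------------------------------------------------------------
import PhotoScan
from collections import defaultdict

chunk = PhotoScan.app.document.chunk
model = chunk.model
T = chunk.transform.matrix  # internal -> scaled/world coordinates
cell = 1.0  # 1 m x 1 m grid boxes

def tri_area(a, b, c):
    # area of a 3D triangle from the cross product of two edge vectors
    ux, uy, uz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    vx, vy, vz = c[0] - a[0], c[1] - a[1], c[2] - a[2]
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

bins = defaultdict(float)  # (i, j) grid cell -> summed surface area

for face in model.faces:
    # transform the three face vertices into world coordinates
    p = [list(T.mulp(model.vertices[i].coord)) for i in face.vertices]
    area = tri_area(*p)
    # bin by the x, y position of the triangle centroid
    cx = sum(q[0] for q in p) / 3.0
    cy = sum(q[1] for q in p) / 3.0
    bins[(int(cx // cell), int(cy // cell))] += area

for key in sorted(bins):
    print(key, bins[key])
------------------------------------------------------------------------------------------------------------------------------------------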
