Messages - reperry

1
Hello,

I would appreciate your help. I wrote two scripts:
(1) one orients the model and sets the bounding box
(2) the other resets the viewpoint to something nice with the z-axis up (see below)

I found that if I run these consecutively as separate scripts, both objectives are reached! Unfortunately, when I call them from within the same script, the second one doesn't happen: the viewpoint stays the same, as if (2) never ran. However, if I include a save call between the two steps, both complete successfully.

I'm including a minimal example below. In place of my script (1), I am calling chunk.updateTransform(), which produces the same inconsistency. Please try running the code below on a project with a completed model in a chunk, then uncomment the doc.save() line and run it again. Hopefully you will also see that the resulting view depends on doc.save().

Why do I need the doc.save() command for the script to carry out the rotate() function afterwards?

I am using Agisoft PhotoScan Professional Version 1.3.3 build 4827 (64-bit).


Code:
import PhotoScan

def rotate():
    doc = PhotoScan.app.document
    chunk = doc.chunks[-1]

    # center the view on the transformed chunk origin, align the view
    # rotation with the chunk transform, and set the magnification
    T = chunk.transform.matrix
    PhotoScan.app.viewpoint.coo = T.mulp(PhotoScan.Vector([0, 0, 0]))
    PhotoScan.app.viewpoint.rot = chunk.transform.rotation
    PhotoScan.app.viewpoint.mag = 1000

if __name__ == "__main__":

    doc = PhotoScan.app.document
    chunk = doc.chunks[-1]
    chunk.updateTransform()    # stands in for script (1)

    #doc.save()                # uncommenting this line makes rotate() take effect
    rotate()

3
Python and Java API / Re: Normal Vectors of Cameras
« on: March 27, 2017, 06:54:32 PM »
Yes!!!! Perfect! Thank you!!!!

That works and makes sense. Thank you so much for the help.

For the cameras in the lowest loop, I now get the following angles between the camera vectors and my chunk y-axis:

2017-03-27 11:52:17 angle: 91.46877907090001
2017-03-27 11:52:17 angle: 91.46147001199684
2017-03-27 11:52:17 angle: 91.51786905965345
2017-03-27 11:52:17 angle: 91.55225732294619
2017-03-27 11:52:17 angle: 91.60681993768857
2017-03-27 11:52:17 angle: 91.59428129535009
2017-03-27 11:52:17 angle: 91.56908584035449
2017-03-27 11:52:17 angle: 91.56522374789431
2017-03-27 11:52:17 angle: 91.53343429406974
2017-03-27 11:52:17 angle: 91.61503578078066

which is exactly what I was hoping to see! This makes my day.

4
Python and Java API / Re: Normal Vectors of Cameras
« on: March 27, 2017, 05:22:17 PM »
It is the chunk coordinate system that I am interested in obtaining the camera vectors for: the coordinate system indicated by the x-y-z axes in my screenshot. The cameras in the bottom ring should have normal vectors with an almost-zero y-component, and the other rings should all have negative y-components.

5
Python and Java API / Re: Normal Vectors of Cameras
« on: March 23, 2017, 11:29:13 PM »
I am working with a turntable capture. I have a stack of four cameras, each tilted down different amounts. I have aligned my chunk so that the rings of cameras are parallel to the x-z plane and y is up through the center of the rings (see attached picture). I want to know how far each camera is tilted away from parallel to the y-axis.

Here is my failed attempt -- the script runs, but the results don't make sense:

Code:
import math
import PhotoScan

def getTilt(chunknum=0):
    chunk = PhotoScan.app.document.chunks[chunknum]
   
    for camera in chunk.cameras:
        if camera.transform is not None:
            rot = camera.transform.rotation()
       
            #y-axis unit vector
            v0 = [0.0,1.0,0.0]
            #y-axis rotated by the camera rotation matrix
            v1 = rot*PhotoScan.Vector(v0)

            #normalize y-vector just in case
            v1length = (v1[0]**2+v1[1]**2+v1[2]**2)**(1/2.)
            v1 = [v/v1length for v in v1]

            #determine angle between y-axis and rotated y-axis with dot product
            theta = math.acos(v1[0]*v0[0]+v1[1]*v0[1]+v1[2]*v0[2])
            print('angle: {} im: {}'.format(theta*180/math.pi,camera.label[0:]))

if __name__ == "__main__":
    getTilt(0)
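
For anyone who finds this thread later: below is a rough sketch of the kind of change that compares the camera's viewing direction, rather than the rotated image y-axis, against the chunk y-axis. It assumes the camera looks along the +Z axis of its own frame, so its viewing direction in chunk coordinates is the third column of camera.transform.rotation(); that assumption is mine, not the confirmed answer from this thread.

Code:
import math
import PhotoScan

def getViewAngle(chunknum=0):
    chunk = PhotoScan.app.document.chunks[chunknum]

    for camera in chunk.cameras:
        if camera.transform is None:
            continue
        rot = camera.transform.rotation()

        # assumed viewing direction: the camera's local +Z axis expressed
        # in chunk coordinates (the third column of the rotation matrix)
        v1 = rot * PhotoScan.Vector([0.0, 0.0, 1.0])
        v1length = (v1[0]**2 + v1[1]**2 + v1[2]**2) ** 0.5
        v1 = [v / v1length for v in v1]

        # angle between the viewing direction and the chunk y-axis
        v0 = [0.0, 1.0, 0.0]
        theta = math.acos(v1[0]*v0[0] + v1[1]*v0[1] + v1[2]*v0[2])
        print('angle: {} im: {}'.format(theta * 180 / math.pi, camera.label))

if __name__ == "__main__":
    getViewAngle(0)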

6
Python and Java API / Normal Vectors of Cameras
« on: March 23, 2017, 09:38:01 PM »
I am interested in scripting based on the camera angles. Specifically, I would like the cameras' normal vectors in the chunk coordinate system. These normals would be parallel or anti-parallel to the sticks displayed coming out of the blue rectangles.

How can I calculate these camera normal vectors?

I know the cameras each have chunk.cameras[0].transform.rotation(), but how do I go from this rotation matrix to a normal vector?

Any assistance would be greatly appreciated!

7
If I pick a vertex in a model (e.g. vert = chunk.model.vertices[0]), how can I quickly get a list of the vertices it is connected to via edges?

The only approach I can think of is to cycle through all the faces and check if that vertex is part of that face. If it is, then I can add the other vertices from that face to a list. This is not very elegant though. Is there a faster method to access the vertices that a specific vertex is connected to?
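
To make the face-scan idea concrete, here is the single-pass version I have in mind: build a dictionary mapping each vertex index to the set of its neighbours, then look up any vertex afterwards. This is only a sketch and assumes face.vertices returns the three vertex indices of each face.

Code:
import PhotoScan
from collections import defaultdict

doc = PhotoScan.app.document
chunk = doc.chunks[0]
model = chunk.model

# build the adjacency map with a single pass over the faces
adjacency = defaultdict(set)
for face in model.faces:
    a, b, c = face.vertices
    adjacency[a].update((b, c))
    adjacency[b].update((a, c))
    adjacency[c].update((a, b))

# indices of the vertices connected to vertex 0
print(sorted(adjacency[0]))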

Thanks

8
Python and Java API / Re: photoscan dense cloud viewer in ipython notebook
« on: December 16, 2016, 07:04:15 AM »
I haven't used matplotlib for this, but I listed it as one of the things I would investigate if I were determined to keep the whole workflow within Python. Here are the 3D capabilities: http://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html. I suspect it will be very slow with 100,000 or 1,000,000 points. If you just want a preview of a point cloud, you could plot every 10th or 100th point.

In my experience, exporting a point cloud as .xyz or a mesh as .obj gives the most human-readable output. I would then write a small script to read and parse the text files into a NumPy array before attempting to plot.
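
Here is a rough, untested sketch of what I mean, assuming a whitespace-separated .xyz export (the file name is just a placeholder):

Code:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection

# read only the x, y, z columns; color columns, if present, are skipped
points = np.loadtxt("cloud.xyz", usecols=(0, 1, 2))

# plot every 100th point to keep the preview responsive
sub = points[::100]

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(sub[:, 0], sub[:, 1], sub[:, 2], s=1)
plt.show()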

Good luck!

9
Python and Java API / Naming the Model within an FBX File
« on: November 30, 2016, 07:45:28 PM »
When I save as .fbx and then load the model into 3ds Max, the model is always called "Model", even though the .fbx file has a more meaningful name, "horse_statue.fbx" for example. Is there any way I can get Agisoft to name the model within the .fbx file something more meaningful? I assume "Model" is being used because that is what the object is called within Agisoft:

doc = PhotoScan.app.document
chunk = doc.chunks[0]
chunk.model returns <Model '20000 faces, 100084 vertices'>

It would help me if I could name the model itself because .fbx is just a container for a model. When the model is imported into other software, it no longer has a reference to the logically-named file it came from.

I know this is an edge case, but thought I'd ask in case anyone else wants this feature too or there is a work-around.

10
Python and Java API / Re: photoscan dense cloud viewer in ipython notebook
« on: November 30, 2016, 06:56:40 PM »
WOAH -- you can use Agisoft from an IPython notebook? I thought the scripts all had to be run from within a live instance of Agisoft. Please let me know how to control Agisoft from an IPython shell if you know how!

As for viewing things in Python -- here are things I would look into:

matplotlib
Mayavi: http://docs.enthought.com/mayavi/mayavi/
Blender (it allows Python scripting)

And for viewing without Python:
MeshLab is great for quickly viewing Agisoft output, except .fbx
FBX Review is good for viewing .fbx output

11
Hello,

Is there any way to return the number of points within the bounding box? Agisoft clearly knows which are in and which are out for the later processing steps.

I suppose the time-intensive way would be to calculate whether the coordinates of each point fall within the bounding box as defined by its center, size, and rotation. I think my point coordinates are in a different reference frame than my bounding box, though, so perhaps I would need to transform them first.
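
For reference, this is the sort of check I have in mind for the sparse (tie point) cloud. It is only a sketch, assuming the point coordinates and chunk.region are both expressed in the chunk's internal frame and that the columns of region.rot are the box axes:

Code:
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunks[0]
region = chunk.region

# the transpose of region.rot takes a chunk-frame offset into the box frame
R = region.rot.t()

count = 0
for point in chunk.point_cloud.points:
    if not point.valid:
        continue
    coord = point.coord
    v = PhotoScan.Vector([coord[0], coord[1], coord[2]])
    v_local = R * (v - region.center)
    if all(abs(v_local[i]) <= region.size[i] / 2.0 for i in range(3)):
        count += 1

print("tie points inside the region:", count)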

Thank you,
Rebecca

12
Thank you for your reply!

13
I would like to use the number of connected components in my script to decide whether or not to remove any components. I can see the number of connected components by manually going:
Tools --> Mesh --> View Mesh Statistics...

But, I can't figure out how to get the number of connected components from within my script.

Here's what I have in mind:

numtriangles = 1000
connectedComponents = chunk.model.GET CONNECTED COMPONENT COUNT   # the call I am missing
while connectedComponents > 1:
    chunk.model.removeComponents(numtriangles)
    connectedComponents = chunk.model.GET CONNECTED COMPONENT COUNT   # the call I am missing
    numtriangles += 1000
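
In the meantime, a possible workaround I could fall back on is counting the components directly from the face connectivity with a union-find over face.vertices. This is only a sketch and I have not timed it on a large mesh:

Code:
import PhotoScan

def connected_component_count(model):
    # union-find over vertex indices; two vertices are merged whenever
    # they appear in the same face
    parent = list(range(len(model.vertices)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    used = set()
    for face in model.faces:
        a, b, c = face.vertices
        union(a, b)
        union(b, c)
        used.update((a, b, c))

    # one component per distinct root among vertices used by at least one face
    return len({find(i) for i in used})

chunk = PhotoScan.app.document.chunks[0]
print("connected components:", connected_component_count(chunk.model))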

Thanks for your help!

14
I am also curious whether or not uploading yaw, pitch, and roll data into the reference pane for the cameras has any effect on the processing. Has there been an update on this topic since http://www.agisoft.com/forum/index.php?topic=370.msg1540#msg1540?

15
Bug Reports / Re: Unable to launch on a Windows 7 machine.
« on: March 03, 2016, 01:26:43 AM »
From the Start menu, you can type "view installed updates" -- maybe you recently installed some of the same Microsoft security updates that I did?

I am also running Windows 7. After installing Windows updates yesterday, I have been unable to open Agisoft and other programs that use the graphics card.

The updates were mostly "Security Update for Microsoft .NET Framework..." Unfortunately, there were 25 updates, so I'm not sure yet what is causing the problem.

Dell Precision
Intel Xeon CPU E5-2650 v3
NVIDIA Quadro K2200
