Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - jsloat

Pages: [1]
Bug Reports / Re: Issues with importing bundler.out alignment
« on: May 24, 2017, 09:25:27 PM »
I really appreciate the example plug-in for importing Bundler files from Reality Capture. (I learn a lot from such examples.)

As others have no doubt discovered, it is unlikely to work "out of the box" on substantial files like the ones I've been trying.
I'm trying to use it to convert a Bundler file with 3,500 cameras, and it keeps crashing.

After wrestling with it for a day, my debugging shows that the format of the bundler.out
file created by the current Beta 1.0 version of Reality Capture has several complicating factors.

The first is ensuring that values RC writes in exponential notation get wrapped in a float() call so they parse correctly.

The second is that when PhotoScan didn't align certain images, there is no transform to put the position data into, so watch for that. If you simply skip those entries, you'll likely end up assigning position data to a non-corresponding image/camera.

The third issue is that (at least in my case) the line-record structure of the file is inconsistent.
For much of my Bundler file, the 3x3 transform matrix has one triplet per line (which the plug-in tries to skip using readline()), but partway through reading the file I start encountering lines where all 9 transform values are on a single line, and elsewhere as many as 13 values per line. That makes the parsing logic much more complicated than what this plug-in (which skips fixed numbers of lines) can handle.
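A token-based reader sidesteps the inconsistent line breaks entirely: split the whole file on whitespace and consume a fixed number of tokens per record, letting float() absorb any exponential notation. Here is a minimal sketch, assuming the standard Bundler v0.3 layout (a "#" header line, camera/point counts, then per-camera f/k1/k2, a 3x3 rotation, and a translation); read_bundler_cameras is my own hypothetical name, and the point records are ignored:

```python
def read_bundler_cameras(path):
    """Parse camera records from a Bundler v0.3 .out file.

    Reads a fixed number of whitespace-separated tokens per camera
    instead of a fixed number of lines, so it doesn't matter whether
    the writer put 3, 9, or 13 values on each line. float() natively
    accepts exponential notation such as "1.2e-05".
    """
    with open(path) as f:
        f.readline()               # header line, e.g. "# Bundle file v0.3"
        tokens = f.read().split()  # whitespace split ignores line breaks

    it = iter(tokens)
    num_cameras = int(next(it))
    num_points = int(next(it))     # point records follow; not parsed here

    cameras = []
    for _ in range(num_cameras):
        f_k1_k2 = [float(next(it)) for _ in range(3)]      # focal, k1, k2
        rotation = [float(next(it)) for _ in range(9)]     # 3x3 matrix, row-major
        translation = [float(next(it)) for _ in range(3)]  # 3-vector
        cameras.append((f_k1_k2, rotation, translation))
    return cameras
```

The same iterator approach extends to the point records if you need them, since each point is also a fixed-length token sequence.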

My plan is to try exporting from Reality Capture in another format, such as Maya, and rewrite a version of this plug-in to parse that format (hoping it is a more consistently structured record format).

(Will share if I get that working)


I still haven't found a workaround for this problem. I've tried everything I can think of, but can't get network renders to run (the nodes claim they can't open the file, yet the GUI version can open the same file on the same machine).
I also upgraded to version 1.3.2 today and tried it again but no improvement.

Can anyone think of anything else I should try?



The first time I opened it on a node host in the regular interface, it gave me a dialog saying it was in "Read Only" mode because the project was also open on the host trying to launch the job via the network cluster. I closed that, and it then opened without the warning.
My developer's mind wondered for a moment whether the "check that the project isn't open elsewhere" step might return a non-success status that makes the batch node treat it as an error, but I'm sure you and the customers who've been on version 1.3.1 for a few months would already have found and reported such a fundamental issue.

In case it mattered, I tried configuring with both upper- and lower-case drive letters ("W:" and "w:") in case the mount was case-sensitive, but there was no change in outcome.



I upgraded our network cluster dispatch and node hosts' PhotoScan from 1.2.6 to 1.3.1, and using the same configuration settings that worked under 1.2.6, I get errors where the nodes can't open the project file:

2017-05-11 18:11:05 Error: Can't open project: w:/___ArtistsFolders/Jay.Sloat/Photogrammetry/arrowhead_duplicate2_test_dense.psx
2017-05-11 18:11:05 processing failed in 0.003 sec
2017-05-11 18:11:05 BuildDenseCloud: quality = Low, depth filtering = Aggressive

So server "w:" is defined/mounted as network storage on all hosts in the facility (Windows only).
The 1.3.1 dispatch host is running the command:
"C:\Program Files\Agisoft\PhotoScan Pro\photoscan" --server --dispatch --control --root w:

and the 1.3.1 cluster node hosts are running the command:
"C:\Program Files\Agisoft\PhotoScan Pro\photoscan" --node --dispatch --root w:

These same settings worked just fine under the previous release.
It fails the same way whether the node host runs Win7 or Win10.

I've had this working with these same settings under 1.2.6, and I know the project MUST be on a server that all nodes can see and must be in ".psx" format, which is the case. The project can be opened on the very node hosts that give errors when trying to process it via the network cluster. Any ideas to try?

Los Angeles

Thank you for an answer.
This took the pain out of manually removing hundreds of deactivated cameras from 2,500 cameras per chunk, across 100 chunks.

Besides significantly improving save and load times (for the project and chunks), I think the
cluster computation of the chunks now uses less memory and spends more time doing useful work rather than swapping at 100%.


I am working on a plug-in that removes certain cameras which, based on proximity and orientation, I determine won't contribute much to computing a dense cloud per chunk. I can use the API to deactivate my candidate cameras, but in another pass I want to actually remove those cameras to make the result small enough to save (when I use more than 75 chunks and each chunk has 2-3K cameras, it won't save).

I'm being very careful that I'm not trying to remove a "copy" of a camera from the camera list (because I understand why that wouldn't work).

I absolutely can remove a camera using the GUI and it goes away.

But via the console window, I find that even a basic command that is not operating on a "copy" (removing element [0], the first camera, from the camera list) won't actually delete the camera.

(It doesn't fail; the return value is "" or None.)
Surely I'm doing something wrong, but I can't figure out why.
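For anyone who hits the same thing, here is the direction I'd try, written as a standalone function so it can be reasoned about: the assumption (based on the PhotoScan 1.x API as I recall it) is that deletions must go through the chunk's own remove() method, because chunk.cameras behaves like a Python-side list whose mutations never reach the project. The function name purge_disabled is my own:

```python
def purge_disabled(chunk):
    """Delete (not just deactivate) all disabled cameras from a chunk.

    Calling list.remove() on chunk.cameras only edits a Python-side
    list and returns None, which matches the "nothing fails, nothing
    happens" symptom above. The assumption here is that the project is
    modified through the chunk's own remove() method, as in the
    PhotoScan 1.x API's Chunk.remove(). Returns the number removed.
    """
    doomed = [cam for cam in chunk.cameras if not cam.enabled]
    chunk.remove(doomed)
    return len(doomed)
```

Inside PhotoScan you would call it as purge_disabled(PhotoScan.app.document.chunk) and then save the document.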

I was hoping to write a Python function that could automatically find outlying dense-cloud points (points with no neighbors within a certain distance) and select, then delete or disable them. I have successfully written functions that operate on the sparse-cloud points, and although I can easily get a handle on the dense-cloud structure for one of my chunks, I've not found any methods to traverse the points that make up the dense cloud and operate on them. I see how to export them to a separate file, but that takes a long time and doesn't solve my issue. Is there an iterating function or method to access dense-cloud points as a list?
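In the meantime, the neighbor test itself is straightforward once you have coordinates (from the sparse cloud, or from an export). This is a brute-force sketch of my own; for millions of dense-cloud points you'd want a spatial index such as scipy.spatial.cKDTree instead of the O(n^2) loop:

```python
def find_isolated_points(points, radius):
    """Return indices of points with no neighbor within `radius`.

    points: a list of (x, y, z) tuples. Brute-force O(n^2), which is
    fine as a sketch but should be swapped for a KD-tree query on
    real dense-cloud sizes.
    """
    r2 = radius * radius  # compare squared distances; avoids sqrt
    isolated = []
    for i, (xi, yi, zi) in enumerate(points):
        has_neighbor = False
        for j, (xj, yj, zj) in enumerate(points):
            if i == j:
                continue
            dx, dy, dz = xi - xj, yi - yj, zi - zj
            if dx * dx + dy * dy + dz * dz <= r2:
                has_neighbor = True
                break
        if not has_neighbor:
            isolated.append(i)
    return isolated
```

The returned indices could then drive whatever select/disable call the API eventually exposes for dense-cloud points.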


I just wanted to follow up: after loading the latest NVidia driver, version 372.90 (released within the last day or two),
my GTX 1080 FE graphics card now shows up in the PhotoScan OpenCL prefs window on my Windows 10 system.

-Jay Sloat-

General / Re: Processing Box new build
« on: September 30, 2016, 03:27:42 AM »
Just wanted to point out that I just bought some killer-spec'd $5000+ new computers with GTX 1080 FE cards, and under Windows 10, NO GPU shows up at all in PhotoScan prefs.  However, I have several Win 7 machines with GTX 1080s, and there they show up and work great.  I love the GTX 1080 card, and suspect that some aspect of the NVidia driver's OpenCL support on Windows 10 is not implemented correctly.

General / Re: Nvidia GTX 1080
« on: September 30, 2016, 01:24:43 AM »
At my company, we have GTX 1080 FE installed on most of our computers. Some run Win 7 and some Windows 10.

As the author of this thread mentioned, in any version of PhotoScan Pro running on Windows 10, no graphics card shows up in the preferences panel where you would choose it.
And this is NOT going over any remote desktop; it's a direct monitor connection to the machine.

On machines running Win 7, the 1080 shows up just fine.

I looked all around the NVidia settings to see if I needed to enable OpenCL or some such but did not find anything.

Would love to have a solution soon as we have been planning to unify our environments under Windows 10.

Los Angeles

An important clarification/qualification on one of my assertions.
In a new session with freshly loaded, pristine images, the GUI will indeed give an error if one tries to use "Align Selected Cameras".
(I was mistaken about this point: the power user who showed me this working wasn't starting from a completely clean slate. She had been working with scenes that had some initial processing, which was strategically aborted and "Reset" via the interface, so they "appeared pristine" when they were not.)

So here's how my power user manually gets to the point where she can process heavy sessions of 8,000 50 Mpx images with manual "Align Selected Cameras" runs. (I share this because I think it is an interesting and insightful technique for breaking through huge solves.)

0) Start with a strong machine: 120 GB of RAM, lots of cores, and a strong graphics card such as a Titan X.
1) Into a new session, load the 8,000 images.
2) Manually (via the GUI) run "Align Photos" and let it:
    a) Detect Points       (which takes her about an hour)
    b) Match Points        (which takes her about an hour)
    c) Find Pairs          (which goes fairly quickly)
    d) It then starts the "Estimating Camera Locations" phase, but she could leave it in this step for more than a week with no apparent sign of progress, because something in this step is quite non-linear. So she aborts manual processing when it enters this phase.

3) Then she saves out the scene in this state.
4) Now you can select some sane number of images (50-100) and run "Align Selected Cameras" on them, a group at a time;
eventually all 8,000 cameras are aligned, and you save that scene to file.  (This is the part I'm trying to automate via an API script.)

We plan to use our standard tools for the later steps (though I'd be lying if I said we have completed these for solves over 2,500 images thus far).

5) Then we plan to use standard chunking tools to break it into 30-50 sections and save.
6) Then we plan to leverage Network Cluster batch processing of the Chunks to generate dense point Cloud.
7) Generate Model
8) Merge chunks
9) Final Texture and output.



My facility has some hellishly heavy solves ahead, composed of more than 2,500 very high-res images.
While I have successfully written a number of plug-ins to help us handle various issues of complexity,
I'm hitting a wall trying to emulate a particular but important GUI behavior via the API.

With so many images, we're finding that even the first camera-alignment step completely bogs down if we try to process all images at once. As a temporary workaround, a user currently selects 50 images at a time and, via the GUI, runs "Align Selected Cameras", which processes just the selected ones and stops; then she selects the next group and processes again.  I'm trying to build something more automated that mimics this by running 50 (or a variable number of) cameras at a time.

I'm aware that in the API, "Align Photos" is broken into two parts, and my plug-in is wired that way.
I set the "selected" state for the bundle of cameras I want to process each time around the loop.
The alignCameras method takes a camera list, which is great. matchPhotos does not, and my expectation was that as long as I have the desired cameras selected, it would (or could somehow be made to) process just the selected cameras.

The behavior I'm seeing is that as soon as I call matchPhotos, it insists on trying to process all 2,500 cameras (which stalls, and is the reason I'm writing this plug-in in the first place).

I've scoured the API docs and the forums for any workaround or clarification on how to emulate the selected-scope processing that the GUI's "Align Selected Cameras" provides.

Any ideas welcomed.
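In case it helps anyone hitting the same wall, here is how I'd structure the batching half (the alignCameras calls) as a standalone function, so that matchPhotos runs once up front and only the alignment is chunked, mirroring the power user's manual workflow. align_in_batches is my own name, and the cameras= keyword on alignCameras is an assumption based on my reading of the 1.2.x API docs:

```python
def align_in_batches(chunk, batch_size=50):
    """Align unaligned cameras in small groups, mimicking repeated
    "Align Selected Cameras" runs in the GUI.

    Assumes chunk.matchPhotos() has already been run on the whole set
    (it has no camera-list parameter in the API version I'm using),
    and that alignCameras accepts a cameras= list of cameras to solve.
    An unaligned camera is one whose transform is still None.
    """
    unaligned = [cam for cam in chunk.cameras if cam.transform is None]
    for start in range(0, len(unaligned), batch_size):
        chunk.alignCameras(cameras=unaligned[start:start + batch_size])
```

Inside PhotoScan the driver would be roughly: run chunk.matchPhotos(...) once, call align_in_batches(chunk), and save the document after it returns (or after each batch, as a checkpoint on very large solves).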


I've included my plug-in below

Python and Java API / Re: Adding third party python libraries
« on: June 02, 2016, 04:07:11 AM »
I had success doing exactly this (adding several external libraries, such as NumPy, for some of my plug-ins).

(The plug-in in question automatically thinned cameras that didn't contribute much, by comparing images within a variable/small distance of each other AND with very similar orientation directions.)
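Sketched roughly, the thinning test looks like this. thin_cameras is a hypothetical standalone version of that logic (in the real plug-in, positions and viewing directions would come from each camera's transform in the PhotoScan API, which I'm not reproducing here):

```python
import math

def thin_cameras(cams, min_dist, max_angle_deg):
    """Greedy thinning: drop a camera when an already-kept camera sits
    within min_dist of it AND points in nearly the same direction
    (angle below max_angle_deg).

    cams: list of (position, direction) pairs of 3-tuples; the
    direction vectors need not be normalized. Returns the indices of
    the cameras to keep, in input order.
    """
    cos_limit = math.cos(math.radians(max_angle_deg))
    kept = []
    for i, (p, d) in enumerate(cams):
        redundant = False
        for j in kept:
            q, e = cams[j]
            dot = sum(a * b for a, b in zip(d, e))
            norm = (math.sqrt(sum(a * a for a in d))
                    * math.sqrt(sum(b * b for b in e)))
            # close in space AND looking nearly the same way
            if math.dist(p, q) < min_dist and dot / norm > cos_limit:
                redundant = True
                break
        if not redundant:
            kept.append(i)
    return kept
```

The kept indices would then map back onto chunk.cameras for the deactivate/remove pass.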
