Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - MeHoo

Pages: [1] 2
Bug Reports / corrupted installer?
« on: October 11, 2017, 12:29:53 AM »
Windows x64. Just trying to get my PhotoScan updated; it takes forever to download, then tells me a .cab file is in use.

default settings

I've turned off everything I have that could interfere and re-downloaded it a million times now.

I get this:

Bug Reports / Error reading dense cloud, bad stream.
« on: September 22, 2016, 02:48:41 PM »
Tons of these errors while rotating the view, etc. I've already rebuilt the dense cloud; same thing.

General / Merging chunks question.
« on: August 26, 2016, 06:07:27 AM »
Merging chunks that only contain certain data, say, to rebuild a file from a Python script. For instance, say I ran a script to generate meshes, but now I want to re-import the original source data, such as the sparse cloud, dense cloud, markers, etc.

Merging two chunks with:
a) cameras and depth maps (only)
b) dense cloud & markers (only)

takes an hour or two? Nothing in either of those two conflicts, right? It's just merging two data sets into one, or am I mistaken? I'm trying to find the best method to strip data out, process it, then bring it back in, but it's just slow as-is. Any help is appreciated. As always, I assume I am doing something wrong. :p

I'd LOVE a way to duplicate chunks and also strip data by bounding box as it's duplicated, to speed that process up. This is my alternative, and it's still lagging behind. If all I care about right now is meshes, and later I want to add the other info back to, say, build textures, I shouldn't have to wait at all. It's all in the same space, no aligning necessary; it all originated from the same chunk.

In my mind, I'm simply combining two data sets without any overriding files. It should be a two-second job.
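For what it's worth, the core of the "duplicate and strip by bounding box" idea is just a point-in-box filter. A minimal sketch in plain Python (the point tuples and box corners here are made-up stand-ins, not PhotoScan API objects):

```python
def inside_box(point, box_min, box_max):
    """Return True if an (x, y, z) point lies within an axis-aligned bounding box."""
    return all(lo <= c <= hi for c, lo, hi in zip(point, box_min, box_max))

def strip_by_box(points, box_min, box_max):
    """Keep only the points inside the box -- the 'duplicate and strip' idea."""
    return [p for p in points if inside_box(p, box_min, box_max)]

cloud = [(0.5, 0.5, 0.5), (2.0, 0.1, 0.3), (-1.0, 0.0, 0.0)]
kept = strip_by_box(cloud, (0, 0, 0), (1, 1, 1))  # only the first point survives
```

The filter itself is the cheap part; whatever makes the merge take an hour would be elsewhere in the pipeline.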


Python and Java API / A little script help?
« on: August 22, 2016, 07:38:30 PM »


This is hard-coded to work left to right, and top-down if your bounding box is set up properly. (top view - 7 on keyboard)

I need this to be configurable. Would it be hard for someone to add check-boxes or a drop-down to select the view (Front, Right, Top, etc.) and have the selection match that designated start angle?

Also, having not gotten that far through the script yet, are the other processes configurable? For example, if I want each chunk to run on Very High and generate textures as a single 16K map?

Just curious if anyone who knows Python can lend a hand.

Thanks in advance!
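One way the view could be made configurable, sketched in plain Python: map each view name to the pair of axes the selection sweeps across. The axis assignments below are illustrative assumptions about how a left-to-right, top-down traversal would generalize to other views, not part of any existing script:

```python
# Hypothetical mapping: view name -> (left-to-right axis, row axis),
# given as indices into an (x, y, z) point.
VIEW_AXES = {
    "top":   (0, 1),  # looking down the Z axis (7 key): sweep X, then Y
    "front": (0, 2),  # looking along the Y axis: sweep X, then Z
    "right": (1, 2),  # looking along the X axis: sweep Y, then Z
}

def selection_order(points, view):
    """Sort points left-to-right within each row, row by row, for the chosen view."""
    a, b = VIEW_AXES[view]
    return sorted(points, key=lambda p: (p[b], p[a]))
```

A drop-down in the UI would then just pick the key into `VIEW_AXES` instead of hard-coding the top view.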

General / Performance questions and possibly a bug...?
« on: August 16, 2016, 03:44:43 AM »
Just curious, but how many of the processes are multithreaded? For example, switching from the vertex-colored mesh to the standard solid mesh takes a very long time on 40-million-poly meshes, and I notice it uses just 25% CPU. Are there plans to make PhotoScan more multithreaded?

I made a scan of a large section of a wall, and when I generated the mesh, High suggested about 34 million polys, I think, so I set it to Custom and went to 70 million. It looked soft, so I chunked it out and redid the dense cloud (at High) and the mesh at a Custom 40 million (this time High suggested 24 million). It's not any sharper in that area, but my question is: did I need to redo the dense cloud? What are the steps to get more resolution in detail areas you break off into their own chunks? Did I do it correctly? Should I have re-aligned and maybe gotten a better tie-point value or something? Is there any overhead lost by not doing this, or by not deleting the cameras that don't contribute to this new section? The reason I know this is not a limitation of the source images is that the dense point cloud looks better than the vertex-colored mesh; there is still far more detail there.

Possible bug:
Finally (for now), when generating large meshes like this, my machine with ~260 GB of memory gets taxed pretty heavily, as expected, but I am noticing visual issues: some of my mesh is missing in the solid view immediately after mesh generation finishes, until I turn the colored mesh on. You can see this in the two screenshots I am attaching. I have seen this numerous times before, and the mesh is indeed missing for that lower section until I switch. Trying to switch back to the solid mesh ALWAYS freezes, with CPU maxed at 25%, for a very long time.

I can go back and forth between the dense point cloud and the vertex-colored mesh in a few seconds. Sounds backwards, right? Is there no shading and specular or something with the colored mesh? All I know is that it pegs 6 GB of RAM (it went from 100 GB during generation to 20 GB after), and trying to go to the solid view dumps the RAM, jumps to 6 GB, and just sits at 25% CPU. I have not been able to get a solid mesh view at these resolutions easily. Curious if it's the memory on the GPU limiting the system? I have two MSI GTX 970s and one Quadro K4000 (which is set as my display card in the NVIDIA settings). Any help in figuring this out would be appreciated.


Bug Reports / Orthographic viewport clipping plane issue.
« on: August 12, 2016, 03:36:17 PM »
This should have an option to adjust how sensitive it is. Odd, because it's the opposite in most other software I use: perspective has a clipping issue while ortho sees to infinity.

Any idea how to fix this?

Feature Requests / More control over UV packing/bleed/distance, etc
« on: August 12, 2016, 02:35:59 PM »
Adaptive orthophoto/mosaic works better in this regard, but using general/mosaic with the same resolution map size gives huge gaps between the millions of tiny UV islands. There should be tolerances to dial in: the amount of stretching allowed before an island is broken off, the distance between islands when packed, and the amount of pixel bleed.

These are necessary for use in other software packages.

Also, I just exported a brick wall scan and the UVs were flipped randomly. They should all be facing the same direction. This significantly affects any bump/normal mapping calculations, if one were to do that.

Also, on this project, every time I open the file or the viewport resets, my geometry and bounding box are off to one side. Does the "fitting" of the viewport always look at the sparse cloud, or at all cameras? When I decide to break a scene out into smaller chunks, I delete dense cloud points and reduce the size of the bounding box, but I hardly ever touch the sparse cloud. Is that affecting me negatively in any way?

Just curious what the best steps should be after aligning a larger area and then deciding to detail out parts and break off chunks into separate files for further refinement. Once I have a dense cloud of the whole area, say, 150' of an alleyway wall, and I choose a 20' section to be my next asset, I delete all of the dense cloud points I don't want and shrink down my bounding box. Then should I redo the dense point cloud generation? Should I delete the cameras that absolutely don't contribute? Or can I go straight to making the mesh from the dense cloud points?

What is the best method here for quality, and also alternatively, speed?

Is the dense cloud the best it will ever get upon generation at any size, or is its resolution determined/limited by the bounding box in some regard?

Thanks in advance.

I was thinking about the easiest method to align my project to a known coordinate system, and I thought: if we could just create a plane in PhotoScan, export it to our package of choice, scale, rotate, and move it to the origin, then re-import it, couldn't that be a very quick way to orient and scale a scene?

Or am I missing a feature I can already use? Rotating and scaling the bounding box and mesh without any sort of dialog is painful at best. There is very little precision to it, in a piece of software that is all about precision... :/

What is the best method, currently, to orient a scan to the origin at the proper scale? I can create a scale bar easily enough, but the orientation is a bit more difficult, especially with a point cloud. I hate processing a dense cloud for 40 hours only to find that my bounding box isn't set up properly.

Any help appreciated.
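The underlying math for the "align to a reference plane" idea is small enough to sketch. Given three picked points (a desired origin, a point along the desired +X direction, and any third point in the plane), you can build an orthonormal frame and express every world point in it. This is plain illustrative Python, not PhotoScan API code, and the point names are made up:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def norm(a):
    length = math.sqrt(dot(a, a))
    return tuple(x / length for x in a)

def frame_from_plane(p0, px, pxy):
    """Build an orthonormal frame from three picked points:
    p0 = desired origin, px = point along desired +X, pxy = any other point in the plane."""
    x = norm(sub(px, p0))
    z = norm(cross(x, sub(pxy, p0)))  # plane normal
    y = cross(z, x)
    return x, y, z

def to_local(p, p0, frame):
    """Express world point p in the plane's coordinate system."""
    d = sub(p, p0)
    return tuple(dot(d, axis) for axis in frame)
```

Applied to a chunk's transform, the same frame would give the rotation and translation to move the scan to the origin; scale would still come from a scale bar as usual.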

I am using the NVIDIA GPU utilization tool. My main display is running off my Quadro K4000, since it's the weaker card for OpenCL calculations compared to the other two GTX 970s in the same system.

When building the dense cloud, it shows no utilization on the GTX cards at all. Is there some sort of conflict going on here, or should I be using another monitoring tool that works better (is it the monitor or PhotoScan)?

I have seen reports in the log after some scans that the GPUs are working, as they show the 970s being twice as fast as the Quadro, but right now it appears they are doing nothing.

Also, if I run other software that is highly GPU-dependent, such as Knald, and dedicate it to my Quadro, would PhotoScan fight with the other software, or would they both just run slower? I ask because I am getting program crashes with Knald, and I am waiting on this scan to finish so I can run more tests with the Quadro left out of the scans, etc.

Any info from anyone with any experience with something similar, please let me know.  This whole area seems sort of mystical to me. ;p
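For what it's worth, PhotoScan of this era selected OpenCL devices through a bit mask in its preferences (and, as far as I recall, a gpu_mask setting in the Python API; worth verifying against the manual for your version). The mask arithmetic itself is simple. An illustrative sketch, where the device ordering is an assumption:

```python
def gpu_mask(device_indices):
    """Build a device bit mask: bit i set means OpenCL device i is enabled."""
    mask = 0
    for i in device_indices:
        mask |= 1 << i
    return mask

# If the Quadro enumerates as device 0 and the two GTX 970s as devices 1 and 2,
# a mask enabling only the 970s would be:
mask = gpu_mask([1, 2])  # 0b110 == 6
```

Leaving the display card (device 0 in this sketch) out of the mask is the usual advice, so the viewport stays responsive while the compute cards crunch.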

General / DNG import options?
« on: July 23, 2016, 04:42:31 PM »
The addition of DNG as an image type is great, as it would eliminate any need to save a thousand TIFF files after importing the DNGs into Lightroom, but it does limit a few things, such as setting a custom white balance and color profile, which are about the only two things I do to post-process my photos before saving out my TIFF files.

Do you foresee PhotoScan having the ability to load a custom color profile saved out via the X-Rite ColorChecker Passport? I shoot a white card and a Macbeth chart for every shoot. It would be excellent to see this ability added.


General / Typical FBX export speed?
« on: July 16, 2016, 06:54:11 PM »
I realize this is one of those "how long is a piece of string" questions, but I have to wonder if something is wrong. I've let FBX exports of ~15M-face meshes with vertex normals go for hours and seen 0% progress. OBJ takes a few seconds.

Dual quad-core, 256 GB RAM, 3x GPUs. Latest PhotoScan as of this writing.

I am merely trying to get a mesh with the vertex colors out of PhotoScan to see if they are even usable for me.

General / Calculating vertex colors... Why?
« on: July 13, 2016, 03:40:54 AM »
Curious what this step is actually doing under the hood, why it's sooo slow, and whether it's even necessary for my needs. Just wondering about the process itself, and if there's a way to eliminate it or make it faster.

Thanks in advance!

Feature Requests / bublcam support.
« on: September 01, 2015, 12:49:21 AM »
Spherical 360 stitched cameras are becoming more and more common.  It would be nice to have an option similar to the fisheye option to import these photos.

I admit, I've yet to try this with the fisheye option, but because of the inaccuracies of the stitching, I can't see it working. It needs a sort of tolerance added to the stitched areas; PhotoScan would need to know where these are and treat them differently, or mask them.

Feature Requests / Manual point registration in pro version.
« on: September 01, 2015, 12:46:22 AM »
Autodesk 360 can do this: you take two photos that just won't align, then you manually mark matching points on each. We also do this with PTGui when stitching panoramas.

If this is already a feature, please point me to where in the manual I can read how to do it. If it isn't... why not?

Seems like a no-brainer to me.
