Show Posts


Messages - MeHoo

16
General / Re: Question about meshing
« on: August 18, 2016, 05:11:19 AM »
The bounding box seems to limit the number of polygons possible (it doesn't seem to affect the dense cloud count - that seems entirely limited by memory).  Try using detail chunks and then merging the chunks together.  It will retain the detail in those areas.

I had a section of a wall that was mostly flat-ish (2-3" deviations in depth, nothing huge), and then there were sections I covered much more closely, from many more angles, that were far deeper.  I duplicated the chunk, sectioned off these "detail chunks" with smaller bounding boxes, and re-ran them at the highest settings possible.  The results were night and day.  I then merged the chunks, including even the mesh, and the quality in those sections was very obviously different.

I wish this whole process was easier, but it does seem to work in some cases.  It seems to be limited by memory: a dense cloud at High with a huge bounding box is nearly the exact same quality as one with a much smaller bounding box in most scenes.  But if you process on Low, break out the detail areas, process those on High, and merge, they will retain their quality for the final mesh generation, in my experience.
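If you want to script it, here's a rough sketch of the same detail-chunk workflow from the Python console.  This assumes the 1.2-era API names (chunk.copy(), chunk.region, doc.mergeChunks, PhotoScan.UltraQuality), so check them against the reference for your version:

Code:
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk

# duplicate the aligned chunk to use as a "detail chunk"
detail = chunk.copy()
detail.label = chunk.label + "_detail"

# shrink the bounding box (region) around the area of interest;
# halving the size here is just a placeholder -- set center/size to your detail area
region = detail.region
region.size = PhotoScan.Vector([region.size.x * 0.5,
                                region.size.y * 0.5,
                                region.size.z * 0.5])
detail.region = region

# rebuild the dense cloud and mesh at the highest settings for just that box
detail.buildDenseCloud(quality=PhotoScan.UltraQuality)
detail.buildModel(surface=PhotoScan.Arbitrary,
                  source=PhotoScan.DenseCloudData,
                  face_count=PhotoScan.HighFaceCount)

# merge the detail chunk back with the original, keeping dense clouds and meshes
# (depending on the build, mergeChunks may expect chunk indices rather than objects)
idx = [i for i, c in enumerate(doc.chunks) if c in (chunk, detail)]
doc.mergeChunks(chunks=idx, merge_dense_clouds=True, merge_models=True)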

Hope that helps!

17
General / Re: Question about source image blur and corner softness.
« on: August 18, 2016, 03:49:14 AM »
It just might.. I am unsure. 

The model is sharp as hell, but the texture comes out soft.  I don't get how the data can create a sharp model while the texture ends up soft... makes me wonder where the disconnect is.

I am not using the crop mode, but I am contemplating it because of how slow the internal memory on this camera is.  I can't keep shooting one frame at a time for more than 20 or so photos before the internal memory just dies on me... so frustrating, because this sensor is AMAZING.

The glancing angles usually come from shots taken at other angles for detail in certain areas (I shoot a lot of urban decay with crumbled bits that I get really close to, which means at f/14 or so the background is still being aligned from those photos).  I would KILL for a depth sensor like a light field camera so I could just auto-mask those points.  I am unsure, but I am hoping PhotoScan is doing that internally for the mesh generation; for the texture, obviously not.

All I know is.. I've seen some REALLY weird results that just don't add up.

And yeah... Sony glass is stupid expensive.  Coming from Canon, owning 400mm lenses, etc., I am pretty upset at the lack of quality for the price.  But that is another debate.  I picked up the Metabones adapter, and I want to see if PhotoScan handles it well without a grid being shot.  I still have the 24-105 from my 5D Mark III.  I will do a test soon.

18
General / Re: Question about source image blur and corner softness.
« on: August 18, 2016, 03:35:59 AM »
Yep... textures are the bane of my existence.  Trying to figure out how the dense cloud can look amazing but the resulting textures come out soft.  I even tried disabling all but the best shot for that area... terrible results.

It's not THAT frequent, but the results are really upsetting.

Go into Mari and paint it manually, I guess... Frustrating.

Glancing, meaning not perpendicular to the surface... PhotoScan seems to still use these.

19
General / Re: Question about source image blur and corner softness.
« on: August 18, 2016, 03:21:51 AM »
Are you using the Sony 16-35mm lens?  I have also noticed quite a bit of softness in the corners with this identical setup.  My scans seem to come out just fine in the end, though.  One thing you should try to avoid is taking "glancing angle" shots, as those seem to produce a lot of pixels that are only "sharp enough" compared to typical megapixel cameras.  You can see this after you align and look at the tie points: some that are not great still get pinned.  I don't know, however, whether these are just used to help alignment as "lesser" quality points (a worst case when nothing better exists), kind of like how the "estimate image quality" stage weighs things... I'd hope so.
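The closest built-in check I know of is the image quality estimate, which you can also run from the Python console and use to disable the worst frames before alignment.  Just a sketch from memory of the 1.2-era API (the "Image/Quality" meta key in particular is something to verify in your build):

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk

# run Agisoft's sharpness/blur estimate for every photo in the chunk
chunk.estimateImageQuality(chunk.cameras)

# disable anything below a threshold (0.5 is the value Agisoft usually
# suggests as "too blurry"; tune to taste)
for camera in chunk.cameras:
    quality = float(camera.photo.meta["Image/Quality"])
    if quality < 0.5:
        camera.enabled = False
        print("disabled", camera.label, quality)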

Still trying to figure out some of these inner details myself.

20
General / Re: Performance questions and possibly a bug...?
« on: August 18, 2016, 03:17:28 AM »
OK, Alexey, the log file is sent.  Thanks.

21
General / Re: Performance questions and possibly a bug...?
« on: August 16, 2016, 08:31:39 PM »
Yeah, I wasn't sure what was in the log because I am submitting all of this remotely.

I will try to remember to save one next time I generate a model.  And yes, color correction takes a while, but on these meshes it seems to help.

I'll send you a log next time I run a model.

22
General / Re: Performance questions and possibly a bug...?
« on: August 16, 2016, 08:20:05 PM »
Emailed.  No, these were previous scans, and the output texture compared to the in-view dense cloud detail was incredibly different.  No matter what I tried, I could not get it to project properly.  I ended up just painting over it.

23
General / Re: Performance questions and possibly a bug...?
« on: August 16, 2016, 08:12:03 PM »
What's the email again?

24
General / Re: Performance questions and possibly a bug...?
« on: August 16, 2016, 05:06:48 AM »
OK, further development: it will switch between vertex color and solid rather quickly if I force the OpenGL renderer in the NVIDIA Manage 3D Settings to the second GTX 970 card.  Guessing that extra ~2 GB of onboard memory helps?  (3 GB with 1 GB reserved... ugh.)

Any way to make PhotoScan detect this sort of thing automatically?  This has been uber frustrating.

Also wondering if this is why my vertex color calculation step takes FOREVER, sitting at 100% for hours... is it just switching to solid mode and paging because the GPU can't handle the added memory?

25
General / Re: Performance questions and possibly a bug...?
« on: August 16, 2016, 04:26:06 AM »
..just loading them up tonight..

OK, so when I break these "detail chunks" off, I go in and prune the sparse cloud and shrink the bounding box.

Does "pruning" the sparse cloud matter at all?  Is this used to generate the dense cloud at all?

I am not getting any more sharpness or detail out of these scans no matter what I do.  The bounding box seems pointless unless memory is an issue.  PhotoScan seems to generate a pretty damn good dense cloud from the get-go, and no matter what I do after the fact, what I get from step one is what I get.  Is there anything I can do after generating a sparse cloud, then moving up to a dense cloud, that might make Ultra High even remotely worth the time involved?  It seems to take exponentially longer and produce very similar results to High for me.

Just curious how much manual labor might be wasted.  Thanks!

26
General / Re: Performance questions and possibly a bug...?
« on: August 16, 2016, 04:04:09 AM »
Oh, one other note: when I rebuilt the dense cloud, I reused the depth maps... wondering if this is what caused the identical output quality between scans?

I know it is hard to compare a dense cloud to a vertex-colored mesh, but this seems pretty far off for doubling PhotoScan's High estimate... I just feel like I am doing something wrong.  When I rebuilt the dense cloud after trimming down the chunk, it went from 91 million points to 108 million on the trimmed run.  So trimming it down did seem to add more detail to the area, but the mesh is still lacking in detail IMHO.  What's interesting is that the previous chunk, which was trimmed down from a much larger scan, ended up at 22 million polys when I generated it at double what PhotoScan originally wanted for the whole wall.  The new estimate on the chunk when rebuilt went to 24 million... so I have to believe it improved at least slightly.  It's just not perceptible.

And especially when projecting textures: I can use 15 x 16k maps and still not get the same resolution as the dense cloud in some areas, which makes me wonder how it can create such detailed point clouds and meshes while the texture projection doesn't seem to use the same information.  This also leads me to believe that I am doing something wrong in my process.
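For reference, this is roughly what I'm running for the multi-sheet bake from the console.  It's a sketch from memory of the 1.2-era buildUV/buildTexture parameters (the blending mode here is my guess, and the count/size argument names may differ in other versions):

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk

# generic UV unwrap split across 15 texture sheets
chunk.buildUV(mapping=PhotoScan.GenericMapping, count=15)

# bake each sheet at 16k with mosaic blending (most detail per pixel,
# at the cost of visible seams if the cameras are slightly misaligned)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=16384)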

Anyhoo... I am wondering if the mesh is static for the most part because the diffuse details don't contribute enough to the shape.  I guess I just feel that the meshes are a bit softer than they should be overall.

Dense cloud vs 2x mesh density:

27
General / Performance questions and possibly a bug...?
« on: August 16, 2016, 03:44:43 AM »
Just curious, but how many of the processes are multithreaded?  For example, switching from the vertex-colored mesh to the standard solid mesh takes a very long time on 40-million-poly meshes, and I notice it is using just 25% CPU... are there plans to make PhotoScan more multi-threaded?

I made a scan of a large section of a wall, and when I generated the mesh, High wanted about 34 million polys I think, so I set it to custom and went to 70 million.  It looked soft, so I chunked it out and redid the dense cloud (at High) and the mesh at a custom 40 million (this time High said 24 million).  It's not any sharper in that area, but my question is: did I need to redo the dense cloud?  What are the steps to get more resolution in detail areas you break off into their own chunks?  Did I do it correctly?  Should I have re-aligned and maybe gotten a better tie-point value or something?  Is there any overhead lost by not doing this, or by not deleting the cameras that don't contribute to this new section?  The reason I know this is not a limitation of the source images is that the dense point cloud looks better than the vertex-colored mesh... there is still far more detail there.

Possible bug:
Finally (for now), when generating large meshes like this, my machine with ~260 GB of memory gets taxed pretty heavily, as expected, but I am noticing visual issues: some of my mesh is missing in the solid view immediately after the mesh generation finishes, until I turn the colored mesh on.  You can see this in the two screenshots I am attaching.  I have seen this numerous times before, and the mesh is indeed missing for that lower section until I switch.  Trying to switch back to the solid mesh ALWAYS freezes at 25% CPU, maxed, for a very long time.

I can go back and forth between the dense point cloud and the vertex-colored mesh in less than a few seconds... sounds backwards, right?  Is there no shading and specular or something with the colored mesh?  All I know is that it sits pegged at 6 GB of RAM (it went from 100 GB during generation to 20 GB after, and trying to go to solid dumps the RAM, jumps to 6 GB, and just sits at 25% CPU).  I have not been able to get a solid mesh view at these resolutions easily.  Curious if it's the memory on the GPU limiting the system?  I have two MSI GTX 970s and one Quadro K4000 (which is set to be my display card in the NVIDIA settings).  Any help in figuring this out would be appreciated.
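For what it's worth, I can at least get the Python console to report which GPUs PhotoScan detects for processing and how much memory each one claims.  This assumes enumGPUDevices and gpu_mask exist in this build as they do in later releases; they only affect OpenCL processing, not the OpenGL viewport, so this won't fix the solid-view redraw, but it might confirm the cards are being seen correctly:

Code:
import PhotoScan

# list every GPU device PhotoScan detects for processing, with whatever
# fields this build reports (name, memory size, etc.)
for i, device in enumerate(PhotoScan.app.enumGPUDevices()):
    print(i, device)

# bitmask of devices enabled for processing: bit 0 = first device, etc.
print("gpu_mask =", PhotoScan.app.gpu_mask)

# e.g. enable only the first two devices and leave the third for display:
# PhotoScan.app.gpu_mask = 0b011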

Thanks!

28
Feature Requests / Re: Paring photos for alignment like dominoes
« on: August 14, 2016, 08:54:50 PM »
Having never played dominoes (when I read the title I was thinking of lining them all up and then knocking them all down in sequence), I was confused at first, lol.

I really like the idea of a visual editor and visual connections, like a node-based graph.  As of now, I see points that match, yet they won't align.

PTGui shows the connections with lines, I believe.  I would love a more intuitive "guided" align than the marker system, because it just doesn't always work.

29
Python and Java API / Re: Filtering Tie Points
« on: August 13, 2016, 07:07:05 PM »
I am unsure how all of this works, but this has me wondering: do the tie points have a quality assigned to them, like the image quality estimate does for images as a whole?  I'd love to be able to filter the blurrier sections of my photos based on a percentage from "better" to "worse" within each image.

i.e., if I could take an image with, say, 500 tie points, build the range of quality between the min/max points, and then delete all that fail to meet at least 30% quality, I can see that being useful.

I am totally guessing here, and this might already exist... I just happened across this post and that thought came to mind.
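The closest existing hook I can find is the gradual-selection filter, which scores every tie point by a single criterion (reprojection error, reconstruction uncertainty, etc.) and lets you select and delete above a threshold.  It's not the per-image percentile idea above, but it is scriptable.  A minimal sketch, assuming the 1.2-era PhotoScan.PointCloud.Filter API:

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk

# score every tie point by reprojection error (other criteria exist, e.g.
# ReconstructionUncertainty, ProjectionAccuracy, ImageCount)
f = PhotoScan.PointCloud.Filter()
f.init(chunk, criterion=PhotoScan.PointCloud.Filter.ReprojectionError)

# select everything above an arbitrary threshold and delete it
f.selectPoints(1.0)
chunk.point_cloud.removeSelectedPoints()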

30
Feature Requests / Re: More control over UV packing/bleed/distance, etc
« on: August 13, 2016, 06:46:29 PM »
What about more control over bleed, spacing, etc.?  The number of pixels wasted when baking a 16k map is intense.
