Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Pages: 1 2 [3] 4
General / Re: Shelling photoscans
« on: February 19, 2014, 07:52:13 PM »
Hey there Igor,

The way we do it here at Scanlab is through zBrush.

1. Import your model.
2. Duplicate the tool.
3. DynaMesh the duplicate to 128 or something low.
4. Deflate the duplicate to -6, or whatever works in your case.
5. DynaMesh the original tool.
6. Merge Down the two tools.
7. PolyGroups - Auto Group the new combined tool by continuity.
8. Isolate and make visible only the inner group (the one that's going to be used as a boolean object).
9. PolyGroups - Group As DynaMesh Sub.
10. Make the whole tool visible and DynaMesh again at the required resolution.
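The "Deflate" step above is essentially an inward surface offset: each vertex moves along its negative vertex normal by the shell thickness. A minimal pure-Python sketch of that idea (this is illustrative geometry, not ZBrush code; the list-of-lists mesh layout is my own assumption):

```python
import math

def offset_mesh(vertices, faces, distance):
    """Move each vertex along its area-weighted vertex normal.

    A negative distance pushes the surface inward, like the ZBrush
    'Deflate' step; the result is the inner shell surface.
    """
    normals = [[0.0, 0.0, 0.0] for _ in vertices]
    for a, b, c in faces:
        va, vb, vc = vertices[a], vertices[b], vertices[c]
        # Face normal = cross(vb - va, vc - va); its length weights by face area.
        u = [vb[i] - va[i] for i in range(3)]
        v = [vc[i] - va[i] for i in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        for idx in (a, b, c):
            for i in range(3):
                normals[idx][i] += n[i]
    out = []
    for vert, n in zip(vertices, normals):
        length = math.sqrt(sum(x * x for x in n)) or 1.0
        out.append([vert[i] + distance * n[i] / length for i in range(3)])
    return out
```

Note that a plain normal offset can self-intersect in concave areas, which is why the DynaMesh passes before and after matter.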


Feature Requests / Restrict Extrapolation from growing into masked areas
« on: December 22, 2013, 10:30:46 PM »
Sometimes the growth escapes into the masked (restricted) areas, when the desired effect is for it to stay contained in the unmasked region.

Feature Requests / Ignore Masked regions for Texture reconstruction
« on: October 23, 2013, 07:02:03 PM »
PS doesn't seem to ignore masked areas when doing texture reconstruction.
It'd be really nice if it did.

Feature Requests / Re: PSD Texture Mask Collage
« on: October 21, 2013, 09:19:15 AM »
Hello all,

Actually, it is possible to export textures based on the individual cameras using Python scripting. The same is valid for orthophoto generation based on the individual cameras.

I don't think it's a viable option for 40+ cameras, though. :S

Another benefit of having this sort of layer approach available to us is that it would let us troubleshoot misaligned cameras much more easily, since Build Texture does not discard portions of photos but builds the whole thing whether it's a match or not.

Let me add this image to show you what I mean.

It's a partial screen grab of an Ultra High build spot, which is
1. well inside the bounding box,
2. has no masks preventing the area from being generated,
3. builds fine when doing a full High build.

And since this cut-off line is so obvious (the white area is the actual surface, the dark area is the opposite side, not facing the camera), my assumption is that PhotoScan discards a few photos that are actually needed to get this area built.

I wish I could build this whole thing at Ultra High so it would reconstruct everything, but I can't due to my 24 GB RAM limit. Even so, this makes me wonder whether there are other spots PhotoScan could potentially build better, but doesn't, because the bounding box size doesn't account for this potential photo discarding.

I think I'm starting to go in circles now. :)

Hello Ruslan,

In the latest version PhotoScan uses only points from the bounding box for depth maps estimation.

However, you can reconstruct the full dense point cloud and for mesh reconstruction use smaller working volume.

I think that's the issue: it does not use all points from the BB volume. The dense point cloud estimation changes as I expand the bounding box, forcing me to scale the BB to a much larger volume than the reconstructed surface needs.

I remember someone in another thread mentioning that their reconstruction was erroring out, and you suggested making the BB volume larger. They might have had the same problem I'm having, where the BB volume does not encapsulate enough photos for reconstruction. It's just a guess, though.

Bug Reports / Bounding Box size & Photo Set & Dense Point Cloud behaviour
« on: October 18, 2013, 01:23:24 AM »
Bounding box size seems to affect the number of photos used when building the point cloud.

Is this the correct behavior? When I build a small part of an object, I expect the whole photo set to be searched for the corresponding area inside the bounding box. But there seems to be some optimization that discards certain photos, which defeats the purpose of having a resizable bounding box, because not everything within the bounding box ends up being built.

Sounds a little strange, doesn't it?
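The behavior I'd expect can be sketched as a simple two-stage filter: first keep the tie points inside the box, then keep every photo that observes any of those points. This is a hypothetical sketch of that expectation (the dict layout with "xyz" and "seen_by" keys is mine, not PhotoScan internals):

```python
def points_in_box(points, box_min, box_max):
    """Keep only the points inside the axis-aligned bounding box."""
    return [p for p in points
            if all(box_min[i] <= p["xyz"][i] <= box_max[i] for i in range(3))]

def photos_for_points(points):
    """Every photo that observes at least one surviving point.

    The expectation: no photo is discarded just because the box is
    small, as long as it sees something inside the box.
    """
    photos = set()
    for p in points:
        photos.update(p["seen_by"])
    return photos
```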

PS Version 1.0.0 build 1736.

Feature Requests / PSD Texture Mask Collage
« on: October 12, 2013, 09:57:34 PM »
This might sound a bit far-fetched, but I was wondering if it would be possible to somehow make PS export a version of a texture consisting of a collage of all photo masks.

The attached image is an example of a mesh generated with PS, sculpted in ZBrush, and re-imported for texture projection.

What you see is an onion-skin-looking texture. What I want is to be able to selectively paint out a mask of a projection, so it contributes better to the desired look of my subject.

So, basically, every "onion" layer would be a Photoshop layer for a single projection (however many cameras there are), and every layer would have a mask associated with it to reveal the parts of the projection that contribute to the "Final Texture".
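Conceptually, the final texture is just an over-composite of the per-camera layers through their hand-painted masks. A toy sketch with single-channel "images" as nested lists (the bottom-to-top layer order and 0..1 mask semantics are my assumptions, matching Photoshop layer masks):

```python
def composite(layers):
    """Composite per-camera texture layers; the last layer is on top.

    Each layer is (pixels, mask): pixels is a 2D list of intensities,
    mask is a 2D list of 0..1 opacities (the painted reveal mask).
    """
    height = len(layers[0][0])
    width = len(layers[0][0][0])
    result = [[0.0] * width for _ in range(height)]
    for pixels, mask in layers:  # bottom to top
        for y in range(height):
            for x in range(width):
                a = mask[y][x]
                # Standard "over" blend: masked-in pixel covers what's below.
                result[y][x] = a * pixels[y][x] + (1 - a) * result[y][x]
    return result
```

Painting a mask value to 0 simply lets the projections underneath show through, which is exactly the selective-reveal workflow described above.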

Feature Requests / Re: Grid
« on: October 10, 2013, 06:42:11 PM »
I'd also like to have this feature, since sometimes when trying to orient models I have to use the PhotoScan window borders as a cue in Ortho mode, which is sort of weird.


General / Re: 3D printing - Mcor
« on: October 10, 2013, 06:24:53 PM »
I recently got my Form1 printer from FormLabs and printed one of my scans.
Unfortunately I was in too much of a hurry and did not set my printer to its highest resolution, so the print turned out pretty low-res.

Check out the attached image!

Feature Requests / Filter Points by Photos
« on: October 01, 2013, 03:41:22 AM »
Would it be possible to implement such a filter?

The reason for it is to make it easier to figure out which photos are misaligned and need to be reset and realigned or masked.

The way it would work: I'd generate my aligned photos,
see where things don't fit or haven't aligned properly,
select a few points from this misaligned set,
use Filter Photos by Points to show me all the photos with possible misalignment issues,
then select those photos and use Filter Points by Photos to see the whole set of points.
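The two filters described above are really just the two directions of the same point-photo observation table. A sketch of the bookkeeping (the (point_id, photo_id) pair layout is hypothetical, not a PhotoScan data structure):

```python
def build_index(observations):
    """observations: iterable of (point_id, photo_id) pairs."""
    point_to_photos, photo_to_points = {}, {}
    for point, photo in observations:
        point_to_photos.setdefault(point, set()).add(photo)
        photo_to_points.setdefault(photo, set()).add(point)
    return point_to_photos, photo_to_points

def filter_photos_by_points(point_to_photos, points):
    """All photos that see any of the selected (misaligned) points."""
    return set().union(*(point_to_photos.get(p, set()) for p in points))

def filter_points_by_photos(photo_to_points, photos):
    """All points seen by any of the selected photos."""
    return set().union(*(photo_to_points.get(ph, set()) for ph in photos))
```

Chaining the two calls gives exactly the workflow above: a few suspicious points select the suspect photos, and the suspect photos select the full set of points they influence.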

Not sure if I'm making any sense here.


It's also interesting that your lens details are omitted from the EXIF data.

Mind you, I'm not trying to imply anything by saying this...rather, I'm trying to point out that it is near impossible to draw any conclusions regarding the viability of the D3200 unless all the variables are known.

After all, if you are using a prime lens that costs $1000 to achieve these results, it pretty much makes the point of even using this camera in the first place moot.   ;)



Oh wow, I'll take that as a compliment!

The image was most likely taken with a regular manual f/2.8 50mm Nikkor lens, which goes for around $50-$70. It is a manual lens, hence the missing EXIF data, so we do our own calibration on these lenses.

Hey Lee,

Your image does look super crisp.
We'll be testing the D3200 at f/13 with flash soon to see how it performs.

But my main point was that there are multiple ways of capturing sharpness. It is no longer a factor of one single camera, but of all the cameras working in conjunction to produce the best result.

I look at it as a balancing act.
The problems we're solving are sort of different it seems.

For those of us at 40 cams or fewer, the main issue is the number of angles we can capture. Because what good is a sharp camera when you have to rotate the subject and it moves accidentally? The geometry build screws up, and all the sharpness of a few cameras gets smudged by that one flop.

The subject is approximately 1.3m from the sensor.

I can't really tell which lens it is exactly (we have a few of them, all manual, btw), since my partner and I have rearranged the setup, but we're getting pretty consistent results, well, maybe except for the slight color tint because of the older lenses and the weather conditions they've been kept in.

Also, we're using a constant fluorescent light source, no flash... also notice that crazy long exposure time. :)

I'm not sure how valid the sharpness argument is for photogrammetry, since the more cheap cameras you have, the closer you can move them in and get that "missing" sharpness detail back, as well as capture more parallax angles while at it. Unless the minimum focus distance starts getting in your way.

Hey all,

I've been following this thread and looked at some sample images coming out of the D3200. They do look pretty soft, but they don't really represent the quality I'm getting.

I'm attaching another sample image taken with the D3200, with the following settings: ISO 100, f/8, 1/25th, sharpening off, prime lens.

About sharpening: as far as I understand, the SIFT algorithm does its own sharpening, plus a whole array of other post-processing manipulations, in order to find proper neighbouring edges/pixels. So leaving this job to the camera or other software may actually reduce build quality.
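For context on that remark: SIFT's keypoint detection is built on a difference of Gaussians (DoG), which behaves like a band-pass filter, so extra sharpening applied in camera changes exactly the frequencies the detector relies on. A minimal 1D sketch of the DoG idea (the sigma values are illustrative, not SIFT's actual scale-space parameters):

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1D Gaussian kernel of width 2*radius + 1."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(signal, sigma, radius=4):
    """Convolve with a Gaussian, clamping indices at the edges."""
    kernel = gaussian_kernel(sigma, radius)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def difference_of_gaussians(signal, sigma1=1.0, sigma2=1.6):
    """Band-pass response: flat regions cancel to ~0, edges stand out."""
    b1 = blur(signal, sigma1)
    b2 = blur(signal, sigma2)
    return [a - b for a, b in zip(b1, b2)]
```

A flat region gives a near-zero response while a step edge produces a strong one, which is why the detector is sensitive to any pre-sharpening that has already reshaped those edges.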

Now, I do need more light in there, but this should only make things better.

Ruslan Vasylev
