Messages - tezen

Pages: 1 2 [3] 4 5
31
General / Re: Upside Down
« on: April 11, 2013, 12:28:06 AM »
Hello Nevalguijo!

The answer is quite simple:

Autodesk's 3D software uses a Y-axis that points in the opposite direction of PhotoScan's Y-axis.
...it's the same "problem" as in ZBrush, where you have to flip texture maps every time. I know how much you hate that XD...

@Alexey&Agi: It would be nice if PhotoScan could display a simple ground plane like any other 3D software, and if users could rotate the model (or the point data incl. cameras) in any direction. I don't know whether that's possible in the Pro version.
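In case it helps: here is a minimal Python sketch of re-orienting exported vertices before loading them into the other package. It assumes the difference is just a Y-up vs. Z-up convention - check the signs for your own pipeline.

Code:
# Minimal sketch: convert vertices between Y-up and Z-up conventions.
# Assumption: the only difference is the up-axis; adjust the signs if your
# software disagrees.
import numpy as np

def y_up_to_z_up(vertices):
    """Rotate points 90 degrees about the X axis: (x, y, z) -> (x, -z, y)."""
    v = np.asarray(vertices, dtype=float)
    return np.column_stack((v[:, 0], -v[:, 2], v[:, 1]))

verts = np.array([[1.0, 2.0, 3.0]])
print(y_up_to_z_up(verts))   # -> [[ 1. -3.  2.]]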

Greetings
tezen

32
+1

...I think this improvement is "an easy one" - and a shortcut would be nice too...

33
Feature Requests / Re: photoscan and spherical panoramic images
« on: February 27, 2013, 01:32:10 PM »
@ethmthree&all:
I'm also in no position to buy a Pro license, but you can do it manually in the Standard version:
- Align your photos and generate a (low-poly) model
- Load that model into another 3D software, replace the low-poly model with a UV-mapped sphere, reorient it to the horizon and scale it up quite a lot (see the sketch at the end of this post)
- Import that sphere into PhotoScan and generate a texture (mosaic blending, with "keep UV" on)
P.S.: I'd told Agi about this wonderful side effect of PhotoScan some time ago - they should implement it as an automatic feature in the Standard version too (I pray :D ) or sell it via a "medium" version. It's quite good for generating highly detailed HDRs very quickly!
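If you prefer to build the UV-mapped sphere by script instead of by hand, here is a rough Python sketch that writes an OBJ sphere with equirectangular UVs. The radius and segment counts are arbitrary assumptions - scale the sphere so it encloses your whole scene.

Code:
# Rough sketch: write a UV sphere with equirectangular UVs to an OBJ file.
# Radius and segment counts are arbitrary; pole quads degenerate to
# triangles, which is fine for this purpose.
import math

def write_uv_sphere(path, radius=100.0, seg_u=64, seg_v=32):
    verts, uvs, faces = [], [], []
    for i in range(seg_v + 1):                 # latitude rings, pole to pole
        v = i / seg_v
        theta = v * math.pi
        for j in range(seg_u + 1):             # duplicate seam column for clean UVs
            u = j / seg_u
            phi = u * 2.0 * math.pi
            verts.append((radius * math.sin(theta) * math.cos(phi),
                          radius * math.cos(theta),
                          radius * math.sin(theta) * math.sin(phi)))
            uvs.append((u, 1.0 - v))
    cols = seg_u + 1
    for i in range(seg_v):
        for j in range(seg_u):
            a = i * cols + j
            faces.append((a, a + cols, a + cols + 1, a + 1))
    with open(path, "w") as f:
        for x, y, z in verts:
            f.write(f"v {x} {y} {z}\n")
        for u, v in uvs:
            f.write(f"vt {u} {v}\n")
        for quad in faces:                      # OBJ indices are 1-based
            f.write("f " + " ".join(f"{i + 1}/{i + 1}" for i in quad) + "\n")

write_uv_sphere("pano_sphere.obj")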

34
General / Re: [Help]Any tips on scanning a model in her wedding dress
« on: February 23, 2013, 04:41:17 PM »
Hey Koyko4!

This is very hard to do, and quite expensive too.
You need a full camera rig (~48 cameras - ask around or read Infinite's posts) and some projectors. With those you can project a light pattern onto the white dress/object. You have to take two shots with that rig: one with the projections on and one with good/normal light sources. The woman/object shouldn't move a single mm between those two shots. The geometry calculation in PS has to be done with the pattern-projected photos, and the texturing has to be done with the good/normal-light photos.

Greetings
tezen

35
Hey Jinny!
Lastly, you used really small photos of varying sizes (down to 3.5 megapixels) with a 17 mm lens at ISO 2000 or higher - all of this can cause problems. Your photos should be 5 MPix (or higher - 8-12 MPix is best for speed + detail), shot at 20 mm or longer (24 mm or 36 mm - less lens correction needed) at ISO 800 for less noise (better still is ISO 100 with good, bright studio lights).
Happy scanning!
tezen

36
+1

...it would be nice if you could also scale those thumbnails with a slider - instead of just turning cameras on/off.

37
Feature Requests / Re: RAM estimation
« on: January 27, 2013, 07:09:25 PM »
+1

Besides the general memory requirements ( http://www.agisoft.ru/wiki/PhotoScan/Tips_and_Tricks#Memory_Requirements ), is there any way to calculate the memory consumption of the bounding box during the build-geometry process? That would be a really good improvement for users without lots of RAM, or for those who split a chunk into parts.
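Just to show the kind of calculator I have in mind, here is a hypothetical Python sketch. The formula and the constants are made up for illustration - this is not Agisoft's real memory model, only the idea that peak memory scales with the number of volume cells inside the bounding box at the chosen resolution.

Code:
# Hypothetical back-of-the-envelope estimator -- NOT Agisoft's real formula.
# Idea: peak memory roughly scales with the number of volume cells that fall
# inside the bounding box at the chosen reconstruction resolution.

def estimate_build_geometry_ram_gb(bbox_size_m, cell_size_m, bytes_per_cell=16):
    """bbox_size_m: (x, y, z) extents of the region in metres;
    cell_size_m: effective reconstruction resolution;
    bytes_per_cell: assumed bookkeeping cost per cell."""
    sx, sy, sz = bbox_size_m
    cells = (sx / cell_size_m) * (sy / cell_size_m) * (sz / cell_size_m)
    return cells * bytes_per_cell / (1024 ** 3)

# Example: a 10 x 10 x 5 m region at 5 mm resolution
print(round(estimate_build_geometry_ram_gb((10, 10, 5), 0.005), 1), "GB")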

38
Feature Requests / Re: Cut and trim by masks
« on: November 27, 2012, 08:45:56 PM »
Hey!

...as described above, it would be nice to have a new reconstruction method called "mask" (besides arbitrary, height field and point cloud). It would work like ZBrush's ShadowBox, but with more than 3 directions and with perspective distortion taken into account. The result would look like roughly carved wood, and still much better than a low-poly or point-cloud reconstruction. You could use that wood-like object for boolean operations inside other software (@Infinite&Mr.Curious: ...and you only have to do it once per object - with a macro script inside ZBrush ;) ). But that's not automatic enough!
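To show roughly what I mean by that ShadowBox-style carving, here is a toy Python sketch. It is a deliberate simplification: three orthographic views along the axes instead of real perspective cameras, just to illustrate the visual-hull principle.

Code:
# Toy sketch of shadowbox-style carving from binary masks (visual-hull idea).
# Simplification: three orthographic views along the axes instead of real
# perspective cameras.
import numpy as np

def carve(mask_xy, mask_xz, mask_yz):
    """Each mask is a 2D boolean array; the result is a boolean voxel grid
    that is True only where all three silhouettes agree."""
    nx, ny = mask_xy.shape
    _, nz = mask_xz.shape
    vol = np.ones((nx, ny, nz), dtype=bool)
    vol &= mask_xy[:, :, None]        # carve along Z
    vol &= mask_xz[:, None, :]        # carve along Y
    vol &= mask_yz[None, :, :]        # carve along X
    return vol

# Tiny example: a 16^3 grid carved by three circular silhouettes -> rounded blob
n = 16
yy, xx = np.mgrid[0:n, 0:n]
disc = (xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 2 - 1) ** 2
solid = carve(disc, disc, disc)
print(solid.sum(), "of", n ** 3, "voxels kept")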

Measuring the distances between a high-quality arbitrary reconstruction and that object could be used to calculate critical zones (= inner edges - I hope you know what I mean!). As the distance increases, the arbitrarily reconstructed points should move along their own Z-direction down (or up) towards the wood-like object. For correct (non-overlapping) shapes, the normals of those points have to be smoothed by their neighbours.

The problem with the critical zones (= inner edges) depends mostly on the smooth algorithm (apart from lighting), I think. With the sharp algorithm, most polys stay in the right position. Maybe a mix of the sharp and the smooth algorithm would be the solution (sharp in zones with lots of 3D information, smooth in zones with little information and/or lots of holes)? It would increase the detail a lot. See for yourself and run a test with photos of your choice. You could reuse the already generated depth maps (BTW: in batch mode there is a missing option for turning off "rebuild depth maps").

Would a mix of mild and aggressive depth filtering (aggressive in zones with lots of 3D info, mild in zones without) increase detail at the pre-processing stage? I don't know! But individual settings depending on the object's area (amount of data, edgy or soft surface etc.), instead of static settings (p1, p2 - now called mild, moderate & aggressive, or sharp (=0) and smooth (=1 - Gaussian algorithm?)), could increase the quality of the result. These are just ideas - maybe someone is inspired by them!

Greetings
tezen

39
Feature Requests / Re: Cut and trim by masks
« on: November 25, 2012, 02:25:14 PM »
Hello!

All in all it's "just" a boolean operation, which you could do manually in other software:

1. Generate a poly plane from the non-masked area and thicken it into a solid object. For example, in ZBrush you can "Make 3D" from a black & white alpha with MRes 512 or higher and minimal Smooth + MDep.
2. That object has to be rotated/positioned so that it faces the camera the mask comes from. You can do this in any modelling program that is able to load the camera positions calculated by PS.
3. Scale that object along its own Z-direction until it nearly reaches the camera's centre point and covers the whole object.
4. Scale the ends of the extrusion in their own X & Y axes until they match the camera's 2D mask.
5. Do a boolean operation to cut that shape out of your PS mesh.

In theory it seems relatively simple, but it's not easy to write software for that process (especially boolean operations on multi-million-poly meshes). The sketch below shows roughly what steps 2-4 mean geometrically.
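Here is a rough Python sketch of turning a 2D mask outline into a 3D "cutter" that follows the camera's viewing rays. It is purely illustrative: a simplified pinhole camera without lens distortion, made-up intrinsics, and the cap faces of the prism are omitted.

Code:
# Rough sketch: turn a 2D mask outline into a 3D "cutter" prism along the
# camera's viewing rays (simplified pinhole camera, no lens distortion).
# The resulting mesh could then be used for a boolean cut in other software.
import numpy as np

def mask_outline_to_cutter(outline_px, K, R, C, near=0.1, far=100.0):
    """outline_px: (N, 2) pixel coordinates of the mask contour.
    K: 3x3 intrinsics, R: 3x3 world-to-camera rotation, C: camera centre.
    Returns (vertices, faces) of the side walls of a prism from 'near' to 'far'
    (cap faces omitted for brevity)."""
    K_inv = np.linalg.inv(K)
    pts_h = np.column_stack((outline_px, np.ones(len(outline_px))))
    rays_cam = (K_inv @ pts_h.T).T                # ray directions in camera space
    rays_world = rays_cam @ R                     # rotate into world space (R^-1 = R^T)
    near_ring = C + near * rays_world
    far_ring = C + far * rays_world
    vertices = np.vstack((near_ring, far_ring))
    n = len(outline_px)
    faces = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]  # quad strip
    return vertices, faces

# Example with a dummy square mask outline and a camera at the origin
K = np.array([[1000.0, 0, 500], [0, 1000.0, 500], [0, 0, 1]])
outline = np.array([[300, 300], [700, 300], [700, 700], [300, 700]])
verts, quads = mask_outline_to_cutter(outline, K, np.eye(3), np.zeros(3))
print(verts.shape, len(quads))                    # (8, 3) 4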

Greetings
tezen

40
Feature Requests / Re: Cut and trim by masks
« on: November 22, 2012, 05:16:17 PM »
+1
...that could improve arbitrary model generation a lot!

41
General / Re: unwanted noise on objects
« on: November 22, 2012, 12:39:40 PM »
+1
Infinite's idea of generating a model out of the masked areas (of chosen photos) and using it as a cutter is really good. It could be generated really fast because, without depth information/maps, it's equivalent to ZBrush's ShadowBox (with more than 3 sides). Even better would be if PS just moved the polys/vertices inside a masked area out to the borders of the masked area (on one or more chosen photos). That would prevent holes. The downside is that some surface detail would be lost.

@JozefP
Just looked at your project. For objects like that it will help to take some close-up photos of those holes. In most cases you will get a more acceptable result. Some spotlights on the floor for better lighting should help too. I've noticed that darker areas are a bit problematic.

42
Feature Requests / Re: Automatic Masking.
« on: November 22, 2012, 11:37:39 AM »
@ajg-cal
Masking from model is superfast for me:
Less than 10 seconds for ~40 Photos (@18Mpix) and a model with ~1 million polys.
Less than 40 Minutes for ~400 Photos (@8Mpix) and a model with ~3 million polys.
...and my PC isn't as good as yours (i5 2500K).

EDIT: The project with ~400 photos and a very simple low-poly model (1400 polys) took less than 5 minutes, so the speed depends a lot on the poly count. You can decimate almost every point cloud, low(est)-poly or medium-quality generated mesh down to 1 million polys without any noticeable difference in the generated masks.
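If anyone wants to script the decimation step, here is a short sketch using Open3D - just one possible tool, and the file names and target face count are only placeholders:

Code:
# Sketch: decimate an exported mesh to ~1 million faces before using it for
# mask generation (Open3D is just one option; any decimation tool would do).
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("photoscan_export.obj")   # placeholder file name
decimated = mesh.simplify_quadric_decimation(target_number_of_triangles=1_000_000)
o3d.io.write_triangle_mesh("photoscan_export_1M.obj", decimated)
print(len(decimated.triangles), "triangles after decimation")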

43
Feature Requests / Re: Automatic Masking.
« on: November 20, 2012, 04:09:43 PM »
@glennn
Masking from model is located at Tools > Import > Import Masks, and there you have to choose "From Model".

THX 2 AGI !

44
Feature Requests / Re: Extremely large projects 1000+ photos
« on: November 18, 2012, 05:34:04 PM »
+1

Besides an "Bounding-Box-Splitter" (like split bounding-box a few times on x, y or z-axis) it would be awesome if PhotoScan would be able to manually create multiple bounding-boxes inside one chunk. Those should be able to generate multiple 3D-Models with different UV-Maps (instead of one big Model and one big texture) which could improve the texturing&UV-problems which are happen to multimillion-faces objects. Btw: PhotoScan does a good job on automatic UV-unwrapping but you?ve to retopologize the models for very clean UVs imho.

If an "Automatic Bounding-Box-Splitter" would be implemented into the Model-Generating-Algorythms it would speeden up PhotoScan many times as RHenriques mentioned.
For example: a roughly cubic bounding box around the point cloud of a statue could be split into 1000 boxes (10*10*10). Boxes without any point-cloud data nearby could be deleted (automatically) - usually more than half (up to 9/10 for objects like statues) of the bounding box contains no data at all. The calculation of 100-500 really small bounding boxes should be veeery fast, even in ultra-high quality. Welding the border seams of those boxes shouldn't be the biggest problem, since the triangulated meshes all depend on the same depth maps. One side effect is that every box could be used for UV generation. That ends up with many seams & islands on the UV map, but the texture space would be used better - a little bit like Ptex. And there wouldn't be any overlapping issues because of the smaller polygon count of each box: for a one-million-poly object with 100 boxes, the polygon count of each box is ~10,000.
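To make the pruning idea concrete, here is a small Python sketch. The 10*10*10 grid and the point threshold are arbitrary - it only shows how many sub-boxes survive when empty cells are dropped:

Code:
# Sketch: split a bounding box into a regular grid of sub-boxes and drop the
# cells that contain no (or almost no) points. Grid size is arbitrary.
import numpy as np

def split_and_prune(points, divisions=(10, 10, 10), min_points=1):
    points = np.asarray(points, dtype=float)
    lo, hi = points.min(axis=0), points.max(axis=0)
    cell = (hi - lo) / np.asarray(divisions)
    idx = np.floor((points - lo) / cell).astype(int)
    idx = np.clip(idx, 0, np.asarray(divisions) - 1)   # points on the max border
    counts = {}
    for key in map(tuple, idx):
        counts[key] = counts.get(key, 0) + 1
    boxes = []
    for key, count in counts.items():
        if count >= min_points:
            cell_lo = lo + np.asarray(key) * cell
            boxes.append((cell_lo, cell_lo + cell))    # (min corner, max corner)
    return boxes

# Example: an elongated random "statue-like" cloud keeps far fewer than 1000 boxes
pts = np.random.randn(20000, 3) * [0.3, 0.3, 1.0]
print(len(split_and_prune(pts)), "of", 10 * 10 * 10, "boxes kept")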

One step further: with a very low-poly reconstruction or decimation of each box's data (let's say a maximum of about 6 polygons per box), you could generate a simple retopology mesh. Because of the orientation of the boxes (all facing the same direction and, in this statue example, almost cubic), it would result in a mesh that looks like DynaMesh (ZBrush).
Two steps further: if PhotoScan could handle subdivision surfaces and displacement maps (I requested this some time ago in this forum) in combination with simple LOD (switching between high-poly and low-poly views of each box), this could be a solution for aerial photogrammetry of very large areas inside one chunk.
Three steps further: the techniques described above, in combination with the animation feature (PhotoScan Pro), a UV-mapped static object (in this example a polygon plane with 600 faces = 6 polygons multiplied by ~100 boxes), the MDD animation file format*, and texture & displacement-map sequences, would allow high-quality 3D animations without the huge amount of memory used right now. *Via MDD you can export liquids out of Blender.

But I'm still dreaming, and I'm amazed by the increasing speed of computers and what's already possible right now.

Greetings
tezen

45
Feature Requests / Re: Up/Down buttons to re-order, arrange Workspace.
« on: November 17, 2012, 02:11:40 PM »
+1
