Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - JohnyJoe

Pages: [1] 2

Just a quick question: Agisoft offers delighting in Metashape and in the standalone Agisoft Delighter. How does it compare with the other delighting methods out there? Mainly I know (and am talking about) delighting in the Unity engine and delighting using the Substance Alchemist application. Are they more or less the same, or is one significantly better/worse?



I would like to know what the difference is between the standalone (free) application "Agisoft Delighter" and the built-in tool/feature named "Remove Lighting" in Agisoft Metashape.
Both "tools" seem quite different: in Agisoft Delighter you have to paint directly on the model, while with Metashape's "Remove Lighting" feature you don't paint on the model (or, to be precise, you CANNOT, even if you wanted to).

So, what are the differences? Why have two "different tools"? Why not include the Delighter "painting" directly in Agisoft Metashape?

Thank you

In Agisoft Metashape (PhotoScan), is there a way to clean up a mess of (not just) floating points? After generating the dense point cloud you get a lot of stray points (pictures below): some are floating, some are "connected" to the main shape. These are garbage points; is there some automated solution/feature to get rid of them? Sure, I can do it by hand, but that costs a lot of time. Is there any feature/workflow/function in Agisoft to remove them quickly/automatically, at least partially?
If Agisoft doesn't have such a feature (in which version?), is there some other software that can clean this easily?
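[Editor's note: PhotoScan itself offers Gradual Selection on the sparse cloud, newer Metashape builds add a point-confidence filter for the dense cloud, and external tools such as CloudCompare ship a "statistical outlier removal" (SOR) filter for exactly this job. Below is a rough, software-agnostic illustration of the SOR idea in plain Python — brute force, toy-sized clouds only, not any vendor's actual implementation:]

```python
import math
import statistics

def sor_filter(points, k=8, std_ratio=1.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours is more than std_ratio standard deviations
    above the cloud-wide average of that statistic.
    Brute force O(n^2): a demo of the idea, not production code."""
    mean_knn = []
    for p in points:
        # distances from p to every other point, nearest first
        ds = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(ds[:k]) / k)
    mu = statistics.mean(mean_knn)
    sigma = statistics.pstdev(mean_knn)
    threshold = mu + std_ratio * sigma
    return [p for p, d in zip(points, mean_knn) if d <= threshold]
```

Points far from everything else get a large mean neighbour distance and fall above the threshold; tightening `std_ratio` removes more aggressively.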


I have a Nikon D3200 camera with the default kit lens (18-55 mm), which I use to scan smaller objects (around 20x20 cm, for example; nothing TINY like an insect, nothing HUGE).
I shoot around 50-150 photos per scan, let's say. I use studio lights (cheap) and a cheap tripod.

I have NEVER done any camera calibration and don't know anything about it.

My question is simple: how much does camera calibration help with increasing the final quality of the scan (the dense point cloud, the resulting mesh, the texture)?

Thank you
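[Editor's note: for context, what calibration estimates is a distortion model of the lens. Metashape/PhotoScan fits a Brown-style model (radial coefficients k1..k4, tangential p1/p2, plus focal length and principal point) and refines it automatically during alignment. The toy sketch below shows, in simplified form, what such a model does to an image point — it is not Agisoft's exact formula:]

```python
def distort(xn, yn, k1, k2, p1, p2):
    """Apply simplified Brown radial + tangential distortion to
    normalized image coordinates (xn, yn). The coefficients k1, k2
    (radial) and p1, p2 (tangential) are the kind of parameters a
    camera calibration estimates; real models add more terms."""
    r2 = xn * xn + yn * yn
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2.0 * yn * yn) + 2.0 * p2 * xn * yn
    return xd, yd
```

With a negative k1 (barrel distortion, typical for a kit lens at the wide end), points far from the image centre are pulled inward; a good calibration lets the software undo exactly this, which is why it mainly affects geometric accuracy.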

Hello, I have been using some old version of Agisoft Metashape; now I'm on a newer, but still older, version than the current one: 1.5.5.

With this "newer but still old" version I get new warning messages popping up here and there; these NEVER popped up in previous versions.

They say:

Warning! The following problems can lead to suboptimal results:

- Images with variable zoom are added to the project. It is strongly recommended to avoid zooming as much as possible.

This happens when I add images shot with "zoom" (in other words, with different focal lengths).

I have ALWAYS done this and never had this warning pop up before. Does using different focal lengths ("zoom") really negatively affect the final quality of the scan? I have never noticed that. Is it true? Or is it only true with newer versions of Metashape, so that older versions could handle it and newer ones give worse results? Or is it the case in all photogrammetry software that using different focal lengths leads to bad results? I would swear I did some tests with this back in 2015 (or whenever) when I started with 3D scanning and never saw bad results.

Is it worse? How much worse? Zoom sometimes feels necessary. Should I really avoid it like the plague?

And another one:

Warning! The following problems can lead to suboptimal results:

- Images with different orientation are added to the project. It is strongly recommended to disable auto rotation in the camera or photo processing software.

Again: does different rotation matter? (In this example I scanned a leaf, and a few photos had the same leaf, the same side of the leaf, just rotated approximately 90 degrees.)

I have done this before, I think, with no problem. Now it gives me this warning, and the point cloud also looks weird (like it's wrong, with "flipped" normals).

So how is it?

    Is using different focal lengths really THAT BAD? Should I avoid it at all costs? (Any pictures of tests of this?)

    Is using different rotations (mainly switching between holding the camera vertically and horizontally, but even rotations of only about 30 degrees, for example) also BAD? How bad? Should it be avoided at ALL COSTS?

(Camera I use: Nikon D3100)

General / Performance diff. GTX 1060 vs GTX 1070 in photoscan...?
« on: February 28, 2017, 06:42:20 PM »
Does anybody have an idea, or actual numbers, of the speed improvement in PhotoScan when using a GTX 1060 6 GB vs a GTX 1070 8 GB?

Thank you

General / What software for cleaning Dense Point Cloud?
« on: October 20, 2016, 04:11:47 PM »
When one generates a dense point cloud, there are often a lot of points that need to be deleted (the whole point cloud "cleaned"). Some points are away from the actual scanned object, but some are near, or even connected to, the point cloud of the actual scanned object.
One can clean it in Agisoft, but I believe there is far more capable software for this than what Agisoft offers (software that can import a large amount of points from Agisoft and, after cleaning, export them back into Agisoft).

What software would you recommend for this?

General / Actual Difference On Mesh "Normal" VS fixed Lens (Nikon)?
« on: October 16, 2016, 09:40:21 PM »
Has anybody done (and I believe someone must have) a comparison, and has info or, even better, pictures of the actual real differences in the generated mesh/model when using the "standard", "default", adjustable lenses that come with your DSLR camera (the 18-55 mm lens that came with my Nikon D3200, for example) versus FIXED (prime) lenses?

I know there is a difference when looking at the pictures themselves, but I have never actually seen any real test/comparison of meshes of the same model captured once with the default lens (18-55 mm, at 35 mm for example) and once with a fixed 35 mm lens.

Is there any actual difference in the mesh? Is it big? Did somebody do this test or have some pictures?

Python and Java API / Automatic (batch) export of model with texture?
« on: October 15, 2016, 10:07:39 PM »
Hello, I need a code/script that automatically exports the generated model/mesh as OBJ with a JPG (or TIFF) texture. I'm not a scripter, but I found this script, which allows a lot of batch processing. I haven't tried it yet, but I believe it should work:

Code:

import PhotoScan, os

# NOTE: several lines were garbled in the original post; the document
# open/save calls and the alignment/UV steps below are reconstructed
# from the PhotoScan 1.x Python API -- check them against the API
# reference for your exact version.
path = PhotoScan.app.getExistingDirectory("Please choose the folder with .psz files:")

print("Script started")
doc =
project_list = os.listdir(path)

for project_name in project_list:
    if ".PSZ" in project_name.upper(): + "/" + project_name)
        chunk = doc.chunks[0]

        chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy, preselection=PhotoScan.GenericPreselection)
        chunk.alignCameras()

        # a dense cloud is normally needed before meshing from it
        chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)
        chunk.buildModel(surface=PhotoScan.Arbitrary, interpolation=PhotoScan.EnabledInterpolation)

        chunk.buildUV(mapping=PhotoScan.GenericMapping)
        chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)
 + "/" + project_name)
        print("Processed project: " + project_name)

print("Script finished.")

Could someone add the lines so that, after saving each file, it also exports the mesh as OBJ with the generated texture (TIFF) for each *.psz file, with the same name as the file (or some unique name)?

Is this possible?

Thank you a lot :-)!
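[Editor's note: a minimal sketch of what such an export step could look like. The `chunk.exportModel()` call and its keyword names (`format`, `texture_format`) are an assumption based on the PhotoScan 1.x Python API and should be checked against the API reference for your exact version; the path helper just reuses the project's file name:]

```python
import os

def export_path_for(project_path):
    """Derive the OBJ output path from the .psz project path,
    keeping the same base name."""
    return os.path.splitext(project_path)[0] + ".obj"

def export_chunk(chunk, project_path):
    """Export the chunk's mesh as OBJ with a TIFF texture.
    exportModel and its keyword names are an ASSUMPTION from the
    PhotoScan 1.x API reference -- verify for your version."""
    out_path = export_path_for(project_path)
    chunk.exportModel(out_path, format="obj", texture_format="tif")
    return out_path
```

Called inside the batch loop right after ``, this would drop `scan01.obj` (plus texture) next to `scan01.psz` for each project.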

General / What programs can save *.rcs *.rcp point clouds?
« on: February 07, 2016, 02:58:53 PM »
I need to get a point cloud from a friend into my 3ds Max. He uses Agisoft PhotoScan, which can export point clouds in many formats, but sadly none is compatible with 3ds Max. 3ds Max can import only the RCS and RCP formats, which come from Autodesk ReCap (which I do not own, and it isn't free AFAIK).

So I want to convert the point cloud from one of the formats supported by Agisoft PhotoScan to RCP or RCS in some OTHER software (Agisoft cannot save directly to RCS/RCP) and import that into 3ds Max.

What programs can save/export to the RCS or RCP formats? ReCap, what else? AutoCAD? Inventor? Something else, even free? (Believe it or not, this information is hard or impossible to find even on the internet.)

Hello, I'm playing with the idea of building my own basic 3D scanning system using photogrammetry (Agisoft PhotoScan, probably), with only around 20-25 cameras. Currently it looks like I won't be able (price-wise) to get cameras that are all the same type. I will probably have to use 3 different camera types, all from the same vendor (Canon?) and all in roughly the same price range.
But besides vendor and price range, the cameras will differ in age (the 1st type is new, the 2nd type is 3 years old, the 3rd type is also 3 years old) and, of course, mainly in megapixels. The newer type would have 20 Mpx; the older ones (2 types) will have 16 Mpx. That means about 3 types. If I'm lucky I will end up with only 2 types, but still 20 Mpx vs 16 Mpx. Will this difference be a problem for Agisoft PhotoScan? Can it handle mixing photos from 2-3 different camera types in one capture session (for one model)?

Will it work? Fail completely, or work but with a slightly inaccurate result?

Hello, I'm thinking about building a rig for face scanning (just a face scan, not a full body scan; "only" the front and the left/right sides of the head would do, I don't need the back of the head).
Could I do it with only around 25 compact point-and-shoot cameras, each for only around 100 USD? For example this one:

1) Would this be sufficient to get a good quality face scan (front, right and left sides of the head)?

2) Would this even be possible? I need to trigger all cameras at the same time; is this even possible with these 100 USD compacts?

General / AnandTech - Which Sample Set Do They Use?
« on: May 11, 2015, 09:14:27 PM »
I could not find the info:

What set of images does the site AnandTech use for their CPU testing in Agisoft PhotoScan?

I guess it's one of the sets on the site here, but which one?:

The doll, monument, spherical panorama, coded targets, or building?

Hello, I've been thinking about the possibilities for full-body single-camera capture of humans.
1) The face done separately from the body, photos taken handheld without a tripod; the person must not move for about 2-3 minutes. Doable.

2) But not moving for a full-body capture doesn't seem realistic, since more photos from more angles are needed. That gave me the idea of putting the real person's clothes on some sort of life-sized figurine or mannequin, something like this:


The problem is that these would need to be poseable, at least able to hold a T or A pose for a character (clothes without face) scan.

My question is simple: do you know of such a figurine/mannequin that would allow such posing (T or A pose)?

General / LuxMark 2.0 similar performance with agisoft in GPU reviews?
« on: February 19, 2015, 11:07:58 PM »
Hello, I think a lot of people have problems choosing a GPU for potentially running PhotoScan (among other things), so they read GPU reviews on various sites. The problem is that these reviews do not include Agisoft PhotoScan as one of their benchmarks, but I have noticed that a lot of reviews include a benchmark called "LuxMark 2.0", which uses OpenCL (as Agisoft does during dense point generation). There are lots of tests of lots of GPUs where you can find the LuxMark 2.0 score for each card.

My question is: is Agisoft PhotoScan really comparable with LuxMark 2.0? That is, if GPU X has 150% of the performance of GPU Y in LuxMark 2.0, is it safe to say that GPU X will also have approximately 150% of the performance of GPU Y in Agisoft PhotoScan?

(Yes, there are some benchmark results on this forum, but they do not include all cards, etc.)
