Agisoft Metashape
Agisoft Metashape => Feature Requests => Topic started by: Infinite on April 25, 2012, 04:57:26 PM
-
I have already spoken to Alexey about this idea. But wanted to get it out there for others to ponder.
Automasking: a technique that uses one set of images of the empty scene and one set with your subject in place. If you think about it, you can generate a mask between the two automatically as a difference: with subject versus without.
I hope we can see something like this introduced into Agisoft. Alexey seems interested in it, but I wonder what others would think of how it could be introduced into the pipeline? Perhaps as a tick box and an input field pointing to the session's empty set of images, to process at the Align or Build stage? This would be ideal for video sequences and head or body captures, when using a lot of images.
(http://www.ir-ltd.net/uploads/mask-example2.jpg)
The idea of then being able to apply this in batch on dozens of images during Align or Geometry Build, without the need to mask by hand or use a chroma-key background with colour spill, is an appealing one! At the moment my captures suffer a lot from background objects (blobs) getting introduced into the final built model if not masked.
You can of course do a Difference Matte in After Effects or a Difference filter in Photoshop by hand, but the results are a little tricky to get right and it is rather time consuming, even though the idea is sound!
If implemented it would then just be a matter of common practice to always take an empty set of images at the start of a capture session :)
Interested in a discussion.
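The difference idea can be sketched in a few lines of NumPy. This is just a hypothetical `difference_mask` helper under the static-camera assumption, not how Agisoft would actually implement it:

```python
import numpy as np

def difference_mask(subject, empty, threshold=30):
    """subject / empty: HxWx3 uint8 photos from the same static camera,
    with and without the subject. Returns 255 where they differ."""
    diff = np.abs(subject.astype(np.int16) - empty.astype(np.int16))
    strongest = diff.max(axis=2)  # biggest change in any colour channel
    return np.where(strongest > threshold, 255, 0).astype(np.uint8)
```

In practice you would also blur before thresholding and close small holes (e.g. with OpenCV morphology) to cope with sensor noise and shadows.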
-
Well that would be an awesome feature..
+1
f/
-
I think this is a great idea, not only for people, but for artifacts as well. If you could automask like this, then you would not need to move around the object, saving a lot of time.
+1 from me :)
-
Hello Infinite and Alexey!
+1.
Your option is quite a good idea for static cameras. If the camera position moves, then Alexey would have to program a routine that first aligns the images by their background.
I throw another idea of automasking into this thread for discussion:
Automasking by point-cloud and region-box ...here shown in "hand-made mode"!
- Calculate a point cloud (10,000 points or more) without a mask - the cameras are usually better aligned that way, in my eyes.
- If you view a specific photo and right-click on it for "reset view", you will see through the calculated camera.
- Make a screenshot for the image software of your choice.
- Colorize the points (ONLY those which are inside the region box) white and everything else black.
- Duplicate the image and blur it (a bigger blur for ~10,000 points, a smaller blur for fine detail or 30,000+ points).
- Tweak the luminance to your liking and turn that greyscale image into a black & white one.
- Shrink the generated mask a little (roughly by the blur size, in line with the luminance settings).
- Voila?!
If that could be done automatically (one click on a button called "Automask by PC/RB"!) it would be less work in most cases!
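For what it's worth, the core of that recipe (project the boxed points, splat, threshold) can be prototyped outside PhotoScan. A hypothetical NumPy sketch, assuming you already have the points clipped to the region box and a 3x4 projection matrix per camera:

```python
import numpy as np

def mask_from_points(points, P, shape, radius=8):
    """points: Nx3 sparse-cloud points inside the region box.
    P: 3x4 camera projection matrix. shape: (H, W) of the photo.
    Splats each projected point as a filled square - a crude stand-in
    for the blur-and-threshold step of the manual recipe."""
    H, W = shape
    mask = np.zeros((H, W), np.uint8)
    homog = np.hstack([points, np.ones((len(points), 1))])
    proj = (P @ homog.T).T
    uv = proj[:, :2] / proj[:, 2:3]  # perspective divide
    for u, v in uv:
        u, v = int(round(u)), int(round(v))
        if 0 <= u < W and 0 <= v < H:
            mask[max(v - radius, 0):v + radius + 1,
                 max(u - radius, 0):u + radius + 1] = 255
    return mask
```

The radius plays the role of the blur size: bigger for sparse clouds, smaller for dense ones.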
Greetings
tezen
-
Hi Lee,
Can see where you are going with this...and indeed it would be very handy ;)
would love to see it implemented
Cheers,
Merry
-
Your idea sounds smart and I'd like to have a tool for this too, but having said that, I find this request too focused on a specific need which may not be of general use. I guess it could be done outside (a Photoshop macro could fit) if you managed to generate mask files suitable to be imported into the PhotoScan project and applied to photos by means of scripting.
regards,
José
-
How is it a specific need? It's a general workflow that could work as a pipeline rule, always. Masking images vastly improves both geometry build and texture quality across the board (saying this from experience). Masking and importing by hand is both time consuming and intensive, especially if you have to work with bloated PNGs or TIFFs.
This workflow would be ideal for face capture, full body capture, props and model/miniature capture. For both "3D" and "4D" input.
-
I know your work, and I sincerely admire it.
but I still think that your case is not that common. I might have misunderstood something, but these are its specifics to my understanding:
1. You have more than one camera. (how could the more common single camera user do the trick of taking all photos with and without subject?).
2. Your subject is not only separated from the background, but also removable.
3. You would probably like to mask thousands of photos that share the same attitude and background to subtract. It is indeed a reasonable desire because it sounds boring... but still rather specific imho.
I wish I could say I have the same needs as you because that would mean I have your flock of cameras and your magic hopefully.
Let me point out that in your setup there are interesting invariants that could be exploited if PhotoScan allowed it, and here (again imho) it could make a little more sense to ask for such a feature, as long as your setup is fixed in all aspects but the subject. Why the need to calculate external parameters for each set of simultaneous photos? And why not fix the bounding box too? You should be allowed to do it just once in those cases and go directly to the DSM phase.
For automating masking, let me think aloud about Tezen's approach.
Suppose you have the sparse cloud; then you can resize and orient the bounding box to fit your subject, back-project the 3D points inside the box to all images, and think about this new planar cloud on your images. How could we convert that cloud into a mask?
Tezen suggests a trick that works as if those pixels were binarized and expanded until they fill a certain region, but that would probably exceed the subject's boundaries (which is bad) and leave holes where few feature points were found... we need a better hint.
Convex hull is my first bid.
Here starts the game: share your thoughts
my kindest regards
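A sketch of the convex-hull bid, purely illustrative and assuming the boxed points have already been back-projected to 2D pixel coordinates (Andrew's monotone chain plus a half-plane fill):

```python
import numpy as np

def _cross(o, a, b):
    # z-component of (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain. pts: Nx2 (x, y) array; returns hull vertices."""
    pts = pts[np.lexsort((pts[:, 1], pts[:, 0]))]  # sort by x, then y
    def build(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and _cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return np.array(build(pts) + build(pts[::-1]))

def hull_mask(pts, shape):
    """White (255) inside the convex hull of the projected points."""
    hull = convex_hull(pts)
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    inside = np.ones((H, W), bool)
    for i in range(len(hull)):
        a, b = hull[i], hull[(i + 1) % len(hull)]
        # a pixel is inside if it is on the same side of every hull edge
        inside &= (b[0] - a[0]) * (ys - a[1]) - (b[1] - a[1]) * (xs - a[0]) >= 0
    return np.where(inside, 255, 0).astype(np.uint8)
```

A convex hull never leaves holes, which addresses the sparse-features objection, though it will over-cover concave subjects.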
-
1. If the user who has only one camera uses a white, black or coloured background, the same thing could be useful: a way to automate the masking procedure. Although this could be done with a Python script in Agisoft as well.
All this could be automated, to a point, outside Agisoft, but it is not trivial for a lot of images and damn hard work - not just boring but physically and mentally demanding. For example, I have recently finished processing 270 expressions of 3 actors for a client. The task is taxing, to say the least.
I've been contacted by 5 companies in the last few weeks who are talking about their own multi-camera systems with Agisoft and how they can improve their workflow. I think you will find multi-camera setups will start to become the norm for most people and companies who are serious about capture. Even if it's just a stereo pair, it's still useful: one shot with subject, one shot without. Even if they chroma key, this idea would still work in that scenario. Rotate and repeat. Automask.
I can understand that users with one camera may not need it, but for licence payers who have shelled out for the Pro version, like myself, it would be ideal. Thanks to Alexey's hard work and innovation, Agisoft has already improved greatly over the last few upgrades. This feature, along with 4D processing, would place Agisoft into another class of production-proven software.
The system here is not always fixed. Automasking would shave off hours, possibly days, of manual labour. Per pose!
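For the single-camera, coloured-background case mentioned in point 1, the mask is just a distance-from-backdrop-colour test. A hypothetical NumPy sketch, not a PhotoScan feature:

```python
import numpy as np

def color_key_mask(img, key_color, tol=40):
    """img: HxWx3 uint8 photo shot against a plain backdrop.
    Marks pixels whose worst-channel difference from key_color
    exceeds tol as subject (255); backdrop pixels stay 0."""
    key = np.asarray(key_color, np.int16)
    diff = np.abs(img.astype(np.int16) - key).max(axis=2)
    return np.where(diff > tol, 255, 0).astype(np.uint8)
```

Real chroma keying usually works in a hue-based colour space to handle spill, but the idea is the same.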
-
Hello!
Now here are the four ideas of this thread:
1. Masking by greenscreen (black, white or colored background)
2. Masking by background (only for static cams)
3a. Masking by pointcloud & bounding-box
3b. ...with a convex-hull
Let me throw in one more idea:
Masking by focus. Almost every camera has an algorithm for autofocus; the same algorithm could be used to separate the areas with fine detail from the blurry ones. Maybe only for generating a convex hull around them.
Different automasking options for different situations would be good.
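The focus idea can be prototyped with a Laplacian sharpness measure: tiles with a strong high-frequency response are treated as in focus. A hypothetical NumPy sketch, with the tile size and threshold made up for illustration:

```python
import numpy as np

def focus_mask(gray, tile=16, threshold=5.0):
    """gray: HxW float image. Marks tiles whose mean absolute Laplacian
    response exceeds threshold as in-focus (255); blurry tiles stay 0."""
    lap = np.zeros_like(gray, dtype=float)
    # 4-neighbour Laplacian on the interior pixels
    lap[1:-1, 1:-1] = (4 * gray[1:-1, 1:-1] - gray[:-2, 1:-1]
                       - gray[2:, 1:-1] - gray[1:-1, :-2] - gray[1:-1, 2:])
    H, W = gray.shape
    mask = np.zeros((H, W), np.uint8)
    for y in range(0, H - tile + 1, tile):
        for x in range(0, W - tile + 1, tile):
            if np.abs(lap[y:y + tile, x:x + tile]).mean() > threshold:
                mask[y:y + tile, x:x + tile] = 255
    return mask
```

Blurry background has almost no high-frequency energy, so its tiles fall below the threshold; a convex hull around the surviving tiles would give the rough outline tezen describes.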
Greetings
tezen
-
Hello Alexey.... are you working on a solution...? 8)
thanks
f/
-
It's already been implemented - I've been lucky enough to test it, and it works amazingly well! Soon to be released in the next builds, I think. Not sure if it's Pro or Standard; the Agisoft guys can say.
-
Hello Lee... thank you very much for the excellent news !... Can't wait to test it.. 8)
-
btw, maybe just a little question if you don't mind... what is the solution used? The first one you proposed?
Thanks
f/
-
Yes the first one, Difference Masking.
-
Very cool Method I think.. thanks Lee.. :)
-
Hello Foodman,
Next update will have masking-by-background option available.
However, this solution is designed only for static cameras. It could also be used as a "blue-screen" background tool if the same one-coloured image is used for all the cameras. A little bit heavy so far, but it could be used until colour-based masking is implemented.
-
Thank you Alexey for clarifying..
indeed this looks like what I need as I use a fixed camera, though sometimes a bit of the turntable is visible... will this affect the automasking...? I guess so... hmmm... what about if it's fully white?
f/
-
Hello all,
In build 1571 the masking-from-model feature was implemented.
This option can be used to create rough masks for objects with complex backgrounds, based on a low-quality model.
-
WOW! This feature is just superb! Thank you Alexey 8)
-
I am just trying to find this feature now. Where is the masking from model feature located ?
-
@glennn
Masking-from-model is located at Tools > Import > Import Masks, and there you have to choose "From Model".
THX 2 AGI !
-
how long is 'masking from model' taking for everyone?
for me it seems like > 10 hours from a point cloud quality model (~400 photos) with hexacore i7
the area selected in the model should only be visible in around 100 photos
are there any particular factors that control how long this process takes?
e.g. number of photos in project, complexity of area selected in model, number of polygons
-
Try building a low or medium quality mesh - you don't need it super detailed. You won't always get perfect masks, but it might help.
-
@ajg-cal
Masking from model is superfast for me:
Less than 10 seconds for ~40 Photos (@18Mpix) and a model with ~1 million polys.
Less than 40 Minutes for ~400 Photos (@8Mpix) and a model with ~3 million polys.
...and my PC isn't as good as yours (i5 2500K).
EDIT: The project with ~400 photos and one very simple low-poly model (1,400 polys) took less than 5 minutes, so the speed depends a lot on the polycount. You can decimate almost every point cloud or low(est)/medium generated mesh down to 1 million polys without noticeable differences in mask generation.
-
thanks for the helpful replies Infinite and tezen :)
I shall try this now
-
it seems to be mainly related to number of faces in model, although I'm still getting some strange results...
The masks created are totally wrong, not just a little approximate. Entirely different areas are masked out. I shall build a project from scratch and test again.
thanks again for the help
EDIT
hmm maybe not just faces but bounding box size? If I make two models, each decimated to 500 000, but with different bounding box sizes, the processing time for mask from model is radically different.
I think I'll email support...
-
No one can really help without seeing examples of what you are doing.
-
WOW !
That works a treat !!!
Legendary effort that's saved me hours of work!