Forum

Author Topic: What is the method if I have a second set of photos of underside of object ?  (Read 7599 times)

Steve003

  • Full Member
  • ***
  • Posts: 169
    • View Profile
Hi,
What is the correct procedure when an object needs turning upside down?

I photographed around the object with it the right way up.
I turned it upside down to photo it again showing its underside.
I have put the photos, with table and floor unmasked, into the program. I get two objects; I also see the table and floor, which I can delete from the mesh or point cloud.

Does one create a project per set of photos, or a chunk per set of photos?

The photos of the underside were in chunk 1; I made a chunk 2 with them, then redid Align Photos on chunk 1 and got a single object. I then selected chunk 2 and ran Align Photos, but I still see the point cloud of the upright object. How do I get to see the object for chunk 2? Whether I click on chunk 1 or chunk 2, the object in the 3D view stays the same.

I expected it to show the object from the chunk 2 photos when chunk 2 was clicked on.

If I then merge chunks, does the program bring the object together, adding the underside mesh to the rest of it, or do I end up with twice the mesh I had before?

I fear that if I merge chunks and Agisoft matches the table top and floor, I will end up with two objects on a common floor again.

I need to have Agisoft make the two models, then delete the points for the floor from each, then bring the two models together. I haven't the time to go masking lots of photos: I have 5,000 photos of a whole load of objects, and it's far quicker to delete the points!

How is this bringing together of the two models done after I have deleted the unwanted surroundings?

Steve
« Last Edit: June 22, 2019, 08:31:45 PM by Steve003 »

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 14890
    • View Profile
Hello Steve,

If you have the possibility to apply masks to all the images (for example, using the From Background option), then just align the photos in a single chunk using the Apply Masks to Key Points option.

An alternative way would be masking the static background on just a few images and using the Apply Masks to Tie Points approach:
https://www.agisoft.com/index.php?id=49

Usually, alignment of the cameras in a single chunk gives better accuracy and quality of results than alignment of multiple chunks.
Best regards,
Alexey Pasumansky,
Agisoft LLC

Steve003

  • Full Member
  • ***
  • Posts: 169
    • View Profile
Hi,
Quote
Usually alignment of the cameras in the single chunk gives better accuracy and quality of the results, rather then alignment of the multiple chunks.
You are saying to try to avoid multiple chunks: mask out all the backgrounds and put all photos into one chunk, and Agisoft works better.
This will also avoid the need to use markers (which are denied us) to bring together a model from two chunks.

I find that with an object like mine, a semicircle of metal resting on its edge on a gridded table, the mesh develops a 'snowdrift' at the point of contact. I can see and know it's a right angle, but photos looking down at it don't tell Agisoft that. If I were then to photograph along that junction, like driving along close to a curving wall, I suppose those photos would help avoid the 'snowdrift' mesh.

If I mask the table out right up to the object, that might help avoid the 'snowdrift' parts of the mesh.

When one has 90 photos of an object, with it being turned upside down, rested on its edge, etc. so as to get good shots of all parts of it and avoid the snowdrift effect, I hope the suggested new tool in PhotoScan 1.4 onwards will save me time. I have bought Metashape, which doesn't run, and have run 1.4.6, but I have now installed 1.2.6 as it performs better with the photos I have. Until you and the team tell me what is stopping the later versions from giving as good a result, I can't use the Apply Masks to Tie Points command.


Can I have both 1.2.6 and 1.4.6 installed and running at the same time?

Steve

Mak11

  • Sr. Member
  • ****
  • Posts: 374
    • View Profile
The methods posted by Alexey work perfectly fine (on par with, and sometimes better than, competing software when aligning those kinds of projects), as long as you have enough photos of good enough quality, which is literally the basis of photogrammetry.

Mak

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 14890
    • View Profile
Hello Steve,

If you have problems with the image matching and alignment on certain datasets in versions 1.4 and 1.5, but can properly align the cameras in version 1.2.6, then could you please send an example of such an image set to support@agisoft.com, so that we can reproduce the difference on our side and check whether anything should be fixed in the algorithms?
Best regards,
Alexey Pasumansky,
Agisoft LLC

Steve003

  • Full Member
  • ***
  • Posts: 169
    • View Profile
Hi Alexey,
I have already done that as per your request, and sent an email describing what happened in the three versions.
Then, with no feedback, I did it again on 4 June 2019.
I have sent another message since, via yet another support ticket, 4 days ago. Both original tickets were closed without me hearing anything.

I do wish to go with one program, ideally 1.4.5, so as to use "Apply Masks to Tie Points", as Metashape won't run.
I would love to have Metashape working, but when it did work it made a complete mess of my exhaust stack.

I sent the photos over with WeTransfer on 4 June 2019 and they were downloaded by Agisoft support, but I have heard nothing since.

Steve

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 14890
    • View Profile
Hello Steve,

OK, thanks, I'll check the status of your tickets and the provided data.
Best regards,
Alexey Pasumansky,
Agisoft LLC

Steve003

  • Full Member
  • ***
  • Posts: 169
    • View Profile
Hi,
It's taken me ages to mask up two of the photos in chunk 1. There is no way I could ever mask up my 2,000 this way.

I have imported the two masks for chunk 1 using the correct {name}.bmp approach, and I see the two masks in mask view. I then run Align Photos, selecting the settings as per the blog on this, i.e. accuracy High, Generic preselection ticked, key point limit 40,000, tie point limit 10,000, Apply Masks to Tie Points, Adaptive camera model fitting ticked, and hit OK.

How do I know this has worked? I still have just 2 of the 15 photos showing a mask in mask view.  :'(

Steve
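[Aside on the {name}.bmp import above: whether every photo has a matching mask file can be checked on disk before running Align Photos at all. A minimal stdlib-only Python sketch — the `.jpg` extension, directory layout, and function name are assumptions for illustration; the `{filename}` placeholder mirrors the filename template used by the Import Masks dialog:]

```python
from pathlib import Path

def find_missing_masks(photo_dir, mask_dir, template="{filename}.bmp"):
    """Report photos that have no mask file matching the naming template.

    {filename} stands for the photo's name without its extension,
    as in the Import Masks filename-template field.
    """
    missing = []
    for photo in sorted(Path(photo_dir).glob("*.jpg")):
        mask_name = template.format(filename=photo.stem)
        if not (Path(mask_dir) / mask_name).exists():
            missing.append(photo.name)
    return missing

# Example: find_missing_masks("photos/chunk1", "masks/chunk1")
# returns the list of photo names that still need a mask.
```

Masks imported this way only exist for the photos that have a matching file, which is why only 2 of 15 photos show a mask; the remaining 13 are simply unmasked, and tie points on their background are not filtered.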

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 14890
    • View Profile
Hello Steve,

According to the provided data, the behavior of Apply Masks to Tie Points is expected. This feature will not create any new masks; it is only meant to prevent tie points from being created in the masked areas. In this small sample project there are no tie points related to the areas that you have masked on the two photos, but in background that wasn't present on the masked images, tie points can still appear, and they form additional sparse cloud elements, for example related to the floor.

Below I am suggesting possible image acquisition / data processing approaches for similar cases, where it is necessary to get a complete 3D model of a rather small object (one that can be rotated and lifted):

1. Initially work with multiple chunks for the same object, where inside each chunk the object does not move relative to the background. Align the images in each chunk individually, then create a rough model (from the lowest-quality dense cloud, for example) for the limited area of interest (reduce the bounding box to your object), and roughly cut away unwanted polygons that are not related to the object. Then use the Import Masks -> From Model option. This operation applies a mask to every aligned camera according to the rough mesh model. Repeat for each chunk, then merge the chunks and re-align the complete image set from scratch using the Apply Masks to Key Points option.
After that, build the dense cloud or mesh.

2. If you are shooting the object placed on a fixed background (a table on a concrete floor), but the object's location on the table may vary during the shoot: take additional images of the "bare" background, i.e. the table without the object on it, from a few locations similar to those used for the real object acquisition (they can be slightly more distant positions in order to cover more area). Include those images in the processing set, but mask each of them completely (fully cover the image with the mask; this is quite easy and fast, and the mask can even be imported from a completely black image of the same size, using the From File option of the Import Masks command). After the masks are applied to the bare-background images, run Align Photos using the Apply Masks to Tie Points option. In this case you should get tie points detected only on the object's surface. This approach is similar to the one mentioned in the first case of the related tutorial:
https://www.agisoft.com/index.php?id=49
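[The "completely black image of the same size" mentioned in option 2 can be produced without an image editor. A minimal stdlib-only Python sketch — it assumes the From File option of Import Masks accepts a 24-bit BMP and that you know your photos' pixel dimensions; the function name is illustrative:]

```python
import struct

def write_black_bmp(path, width, height):
    """Write a fully black 24-bit BMP (all pixels 0,0,0) of the given size."""
    row_bytes = width * 3
    padding = (4 - row_bytes % 4) % 4      # BMP rows are padded to 4 bytes
    image_size = (row_bytes + padding) * height
    file_size = 14 + 40 + image_size       # file header + info header + pixels
    with open(path, "wb") as f:
        # BITMAPFILEHEADER: magic, file size, two reserved words, pixel offset
        f.write(struct.pack("<2sIHHI", b"BM", file_size, 0, 0, 54))
        # BITMAPINFOHEADER: 40-byte header, 1 plane, 24 bpp, no compression
        f.write(struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24,
                            0, image_size, 2835, 2835, 0, 0))
        # Black pixels (and padding) are all zero bytes
        f.write(b"\x00" * image_size)

# Use the pixel size of your photos here, e.g. 6000 x 4000:
write_black_bmp("mask.bmp", 320, 240)
```

One such file can then be imported as the mask for every bare-background image.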

3. This one requires more effort at the image-acquisition stage. Put the object of interest in front of a camera fixed rigidly relative to the background, and rotate the object in front of it (the so-called turntable scenario); then take an image of the empty scene from the same camera location. If necessary, repeat the operation from another camera location to capture the object from a different angle of view, but for every camera location take an image of the background. After the shooting session is finished, use the Import Masks -> From Background option for each subset of images, selecting the corresponding background photo for them. PhotoScan / Metashape will then automatically create a mask for the background, keeping only the object unmasked.
Then process as usual using the Apply Masks to Key Points option.
Following this approach it is necessary to avoid considerable camera shake (a tripod mount and a remote shutter release are recommended) and differences in lighting conditions between the object sets and the corresponding background photos, and to have the background color contrast strongly with the object's surface colors, to avoid confusing the automatic mask creation.
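[The From Background idea can be illustrated in miniature: compare each pixel of the object shot against the same pixel of the empty-scene shot, and mask wherever they match. The toy Python sketch below is only an illustration of the principle, not PhotoScan's actual algorithm — the tolerance value and the per-channel colour difference are assumptions — but it shows why the camera must stay fixed and why the background colour needs to contrast with the object:]

```python
def background_mask(photo, background, tolerance=30):
    """Classify each pixel: True = keep (object), False = masked out.

    A pixel counts as background when its colour is within `tolerance`
    of the same pixel in the empty-scene shot. This is why the camera
    must not move and the lighting must not change between the shots:
    any shift makes object pixels line up with the wrong background
    pixels, and low contrast makes object pixels look like background.
    """
    mask = []
    for (r, g, b), (br, bg, bb) in zip(photo, background):
        diff = abs(r - br) + abs(g - bg) + abs(b - bb)
        mask.append(diff > tolerance)
    return mask

# Toy 3-pixel "images": grey background, one red "object" pixel.
photo = [(128, 128, 128), (200, 30, 30), (130, 126, 128)]
background = [(128, 128, 128), (128, 128, 128), (128, 128, 128)]
print(background_mask(photo, background))  # → [False, True, False]
```

Only the middle pixel differs strongly from the background shot, so only it survives the mask.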

Hope it is helpful.
Best regards,
Alexey Pasumansky,
Agisoft LLC

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • *****
  • Posts: 14890
    • View Profile
Just came across the following tutorial, which may also be helpful:
https://bertrand-benoit.com/blog/the-poor-mans-guide-to-photogrammetry/
It is rather similar to option 1 from the previous post.
Best regards,
Alexey Pasumansky,
Agisoft LLC

Steve003

  • Full Member
  • ***
  • Posts: 169
    • View Profile
Hi,
Nice article, but he doesn't seem to cover my scenario: the object is not on a turntable, and I move around the object, then move the object and repeat the walkaround.

I am writing up a step-by-step guide for different shooting scenarios, which I will then post here; hopefully it can be made a sticky so it's always seen at the top of the forum, as other forums do. I have sent it to you for approval, but I am seeking clarification on method 1 now; see your emails.

Initially I am focusing on METHOD D for the above situation. I think it also needs altering, as your method 2 mentioned the requirement for a shot of the background without the object; that's not what was done, or was possible, in my sessions.

I have tried to incorporate what David Cockey wrote in another thread.

Quote
I frequently use Mask from Model in situations where the background is moving relative to the object by a small amount, such as a boat tied to a dock or an object which was bumped while being photographed. I align photos, create a dense point cloud and build an initial mesh without masking. Masks are then created using the initial mesh. The masked photos are re-aligned. The masks are then removed and a new dense point cloud is created and mesh built.

So I now have my Method D as follows, but I am unsure of step 10 onwards.

METHOD  D.
Photos taken moving around the object (object not on a turntable). The background may move during the overall photoshoot.
For an object deliberately moved after one photo sequence, e.g. to photograph its underside, or when items in the background change.
1st sequence: object upright; 2nd sequence: object placed on its side; 3rd sequence: object placed upside down.

1) Create a chunk per sequence.
2) Align Photos in each chunk.
3) Reduce the bounding box in each chunk to fit just your model (use Model > Show/Hide Items > Show Region as well as the region resize tools).
4) Create a rough model (from the lowest-quality dense cloud, for example) for the limited area of interest.
5) Create a mesh.
6) Roughly cut away unwanted polygons that are not related to the object.
7) Use the Import Masks -> From Model option. This operation applies a mask to every aligned camera according to the rough mesh model.
8) Repeat Import Masks -> From Model for each chunk.
9) Re-align the masked photos.
10) Remove the masks. (HOW IS THAT DONE?)
11) Merge the chunks.
12) Re-align the complete image set from scratch using the Apply Masks to Key Points option.
13) After that, build the dense cloud or mesh.

I did have...
10) Merge the chunks.
11) Re-align the complete image set from scratch using the Apply Masks to Key Points option.
12) After that, build the dense cloud or mesh.

Can someone correct these steps for me,

and tell me what to add to step 10, as I am not sure how masks are removed, or even why!

I have gone for separate chunks because, when I put all the pics into one chunk, I ended up with two models on the same gridded baseboard, the right-way-up one and the upside-down one, and I couldn't then place a region around the object as it was not one model. If the background had been masked, that wouldn't have happened, but this method is about making the mask: a catch-22 situation!

Steve
« Last Edit: July 01, 2019, 04:39:08 PM by Steve003 »