
Recent Posts

51
Python and Java API / Re: Remove polygons from mesh
« Last post by Alexey Pasumansky on April 12, 2024, 01:11:24 PM »
Hello apicca,

To delete faces from the model you need to select them first and then call:
Code: [Select]
chunk.model.removeSelection()
In your code the assignment of the model variable seems to be missing, for example: model = chunk.model

To select the polygons you can do the following, according to your code:
Code: [Select]
for face_index in range(len(chunk.model.faces)):
   if face_index in polygons_to_remove:
      chunk.model.faces[face_index].selected = True
Or simply apply the face selection directly in the loop where you check whether the face exceeds the threshold.
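
Putting the two pieces together, a minimal sketch of the whole filter could look like this (assuming the active chunk already has a mesh; the threshold is just an example value, and note that vertex coordinates are in the chunk's internal coordinate system, so it is not in meters unless you apply chunk.transform):

Code: [Select]
import Metashape

doc = Metashape.app.document
chunk = doc.chunk
model = chunk.model

max_area_threshold = 5  # example threshold, in squared internal coordinate units

def triangle_area(v1, v2, v3):
    # Heron's formula for the area of a triangle given its three vertices
    a = (v1 - v2).norm()
    b = (v2 - v3).norm()
    c = (v3 - v1).norm()
    s = (a + b + c) / 2.0
    return (s * (s - a) * (s - b) * (s - c)) ** 0.5

# select every face whose area exceeds the threshold
for face in model.faces:
    v1 = model.vertices[face.vertices[0]].coord
    v2 = model.vertices[face.vertices[1]].coord
    v3 = model.vertices[face.vertices[2]].coord
    face.selected = triangle_area(v1, v2, v3) > max_area_threshold

# remove the selected faces from the mesh
model.removeSelection()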
52
Python and Java API / Remove polygons from mesh
« Last post by apicca on April 12, 2024, 07:58:13 AM »
Hello,

I am trying to remove all polygons that exceed a selected size threshold. What is the best way to delete the selected faces: model.cropSelection() or model.removeSelection()?

I keep on getting Error: 'Metashape.Model' object has no attribute 'selectFaces'

Using Agisoft Metashape Professional 1.7.6


Code: [Select]
def calculate_triangle_area(vertex1, vertex2, vertex3):
    # Calculate the lengths of the sides of the triangle
    a = (vertex1 - vertex2).norm()
    b = (vertex2 - vertex3).norm()
    c = (vertex3 - vertex1).norm()
   
    s = (a + b + c) / 2.0
   
    # Calculate the area using Heron's formula
    area = (s * (s - a) * (s - b) * (s - c)) ** 0.5
   
    return area

# Function to filter polygons based on their size (area)

def filter_polygons(chunk, max_area_threshold):
 
    polygons_to_remove = []
   
    # Iterate over each face (polygon) in the model
    for face_index in range(len(model.faces)):
        # Get the vertices of the current polygon
        vertices = model.faces[face_index].vertices
       
        # Ensure the polygon is a triangle
        if len(vertices) == 3:
            # Extract the vertices' coordinates
            vertex1 = model.vertices[vertices[0]].coord
            vertex2 = model.vertices[vertices[1]].coord
            vertex3 = model.vertices[vertices[2]].coord
           
            # Calculate the area of the triangle
            area = calculate_triangle_area(vertex1, vertex2, vertex3)
           
            # Check if the area exceeds the specified maximum threshold
            if area > max_area_threshold:
                # Add the face index to the list of polygons to remove
                polygons_to_remove.append(face_index)

    # Remove polygons based on the indices collected
    model.eraseFaces(polygons_to_remove)


max_area_threshold = 5  # Specify your maximum area threshold here

for chunk in chunks:
    # Filter polygons for each chunk
    filter_polygons(chunk, max_area_threshold)

doc.removeSelected()


Thank you!
53
Thank you for the detailed explanation. According to the code above, if I change the block size from 250 to 450, this will reduce the number of blocks from about 300 to 80-90. Without reducing the pixel size of the texture, this should make it possible to generate the texture within 128 GB of memory.

I'll try again, but a model at this scale takes a very long time to generate.

And another suggestion: when building the texture for a blocked model, I think it would be better to show a memory requirement estimate in the dialog.
54
Hello steve3d,

Reworking the texturing procedure in general is already in progress, but it will likely take quite a while to be implemented in the release version. So for now, in order to reduce the memory consumption for block model texture generation, you need either to use a lower texture resolution (so that a smaller number of pages is generated per block) or to reduce the size of the initial model blocks.

For a very rough estimate of the number of texture pages and the RAM consumption you can use the following code; adjust the input values (ghosting filter enabled/disabled, block size, texture page size, output resolution) to your case:

Code: [Select]
K = 3 #surface complexity and atlas filling coefficient
K_ghost = 60 #60 - with ghosting filter, 36 - without ghosting filter
texture_size = 16384 #pixels
block_size = 25 #meters
resolution = 0.00075 #m/pix resolution
N_pages = int((block_size / resolution / texture_size) ** 2 * K) + 1
req_memory = texture_size ** 2 * K_ghost * N_pages #bytes
print(N_pages, "texture pages, ", req_memory / 1024 ** 3, "GB")
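
If it helps, the same estimate can be wrapped into a small helper so different block sizes and settings can be compared quickly; this is just a repackaging of the formula above (the sample call mirrors the default values from the snippet):

Code: [Select]
def estimate_texture_memory(block_size, resolution, texture_size=16384, ghosting=True, K=3):
    # same rough formula as above: pages per block and RAM in GB
    K_ghost = 60 if ghosting else 36
    n_pages = int((block_size / resolution / texture_size) ** 2 * K) + 1
    req_memory_gb = texture_size ** 2 * K_ghost * n_pages / 1024 ** 3
    return n_pages, req_memory_gb

pages, gb = estimate_texture_memory(block_size=25, resolution=0.00075)
print(pages, "texture pages, ~", round(gb, 1), "GB")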
55
Python and Java API / Re: Access script path from inside GUI?
« Last post by Wizyza on April 11, 2024, 05:24:27 PM »
Hi everyone,

You can disregard this post. This turns out to not be a Metashape issue.

I wasn't aware of the __file__ variable in Python. My script can now access files outside of itself.
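
For anyone else who runs into this, a minimal example of the __file__ approach (the settings.json file name is just a placeholder):

Code: [Select]
import os

# directory of the currently running script
script_dir = os.path.dirname(os.path.abspath(__file__))

# build a path to a file that sits next to the script (placeholder name)
settings_path = os.path.join(script_dir, "settings.json")
print(settings_path)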
56
Agisoft Cloud / Re: Agisoft Cloud Release Notes
« Last post by Ilya Shevelev on April 11, 2024, 05:14:43 PM »
Released on 2024-04-11

New features
  • Added limit box tool for Tiled Models and Point Clouds.

A tutorial on the limit box tool is available in the following article in our knowledge base.


The full changelog is available in the following article in our knowledge base.
57
General / Re: RTK Positioning accuracy with Mavic 3E
« Last post by Dieter on April 11, 2024, 09:04:06 AM »


This is my point exactly: this will not allow others to participate by reading or commenting. Suggesting the use of a translator on a forum where English is used is pretty rude.


Did you see the winking smiley at the end of my sentence?
Do I really need to explain what that smiley means to you?

Just for your information: all my statements here come 1:1 from Google Translate, otherwise I couldn't write anything here at all, because my English is too bad for that.

And that's all there is to it from my side.


Dieter
58
And another thing: is it possible to run these steps in parallel?

Code: [Select]
2024-04-11 09:27:18 saved group #259/384: 182.68 MB cubes, 259.858 MB/s, 39.0955 compressed MB - i.e. 21% compression
2024-04-11 09:27:18 2 cameras done in 13.596 s
2024-04-11 09:27:18 loading 2 cameras...
2024-04-11 09:27:19 generating cubes...
2024-04-11 09:27:30 total: 30438815 samples, 12176217 image cubes, 12.8872 avg level
2024-04-11 09:27:31 saving 10985790 merged group cubes ~36%...
2024-04-11 09:27:31 saved group #260/384: 167.63 MB cubes, 261.106 MB/s, 34.4178 compressed MB - i.e. 21% compression
2024-04-11 09:27:31 2 cameras done in 12.859 s
2024-04-11 09:27:31 loading 2 cameras...
2024-04-11 09:27:32 generating cubes...
2024-04-11 09:27:40 total: 13249102 samples, 5170086 image cubes, 12.6206 avg level
2024-04-11 09:27:40 saving 4776529 merged group cubes ~36%...
2024-04-11 09:27:41 saved group #261/384: 72.884 MB cubes, 114.598 MB/s, 14.7257 compressed MB - i.e. 20% compression
2024-04-11 09:27:41 2 cameras done in 9.416 s
2024-04-11 09:27:41 loading 2 cameras...
2024-04-11 09:27:42 generating cubes...
2024-04-11 09:27:49 total: 12203757 samples, 5281054 image cubes, 13.1447 avg level
2024-04-11 09:27:49 saving 4356566 merged group cubes ~36%...
2024-04-11 09:27:49 saved group #262/384: 66.4759 MB cubes, 226.109 MB/s, 15.0156 compressed MB - i.e. 23% compression
2024-04-11 09:27:49 2 cameras done in 8.577 s
2024-04-11 09:27:49 loading 2 cameras...
2024-04-11 09:27:50 generating cubes...
2024-04-11 09:27:59 total: 23498082 samples, 9017552 image cubes, 12.5799 avg level
2024-04-11 09:28:00 saving 7712995 merged group cubes ~33%...
2024-04-11 09:28:01 saved group #263/384: 117.691 MB cubes, 248.818 MB/s, 24.9984 compressed MB - i.e. 21% compression
2024-04-11 09:28:01 2 cameras done in 11.161 s
2024-04-11 09:28:01 loading 2 cameras...
2024-04-11 09:28:02 generating cubes...
2024-04-11 09:28:12 total: 19114522 samples, 7384658 image cubes, 12.678 avg level
2024-04-11 09:28:12 saving 7384658 merged group cubes ~39%...
2024-04-11 09:28:13 saved group #264/384: 112.681 MB cubes, 148.264 MB/s, 21.9049 compressed MB - i.e. 19% compression
2024-04-11 09:28:13 2 cameras done in 12.321 s
2024-04-11 09:28:13 loading 2 cameras...
2024-04-11 09:28:14 generating cubes...
2024-04-11 09:28:26 total: 30670067 samples, 12238388 image cubes, 12.6715 avg level
2024-04-11 09:28:27 saving 11891791 merged group cubes ~39%...

Each step only processes 2 cameras, and each step only uses one core. When building large models, this also wastes too much time.
59
General / Re: PPP and shifting camera position or output position with RTK m300
« Last post by dpitman on April 11, 2024, 01:59:58 AM »

The Emlid workflow geotags the photos, but that wipes the yaw, pitch, roll.
 I assume that I will want this in order to best use Agisoft's photo alignment?

In that case I can pull the latitude and longitude from the Emlid Studio events.pos file into a csv.
The only problem here is that there is no associated filename; I can write a little script that pulls the EXIF data from the original photos and then matches them via timestamp.
I am worried this is getting a bit complicated and I missed a simpler solution.

Yes. Metashape uses those parameters. Emlid is aware of this and is actively working on Emlid Studio to retain all of that data. Until then, you need to employ a workaround like the one you suggested. You could also use RedToolbox (https://www.redcatch.at/REDtoolbox/) in the meantime; it has a nominal fee and a fully functioning trial.


I have two more questions:
1. Can I do this when I have already built the ortho, and just update it? It doesn't seem likely, but just checking which step I need to go back to.

If you use the method of writing the corrected camera positions back into the original EXIF of the images, then they will be used when ingested by MS. If you use the original images and then supply a camera position reference file, then yes, you want to get everything sorted position-wise before having MS process any products. MS can update the sparse cloud (tie points) based on your changes, but products beyond that need to be re-done.
60
General / Re: PPP and shifting camera position or output position with RTK m300
« Last post by Jake on April 11, 2024, 01:13:18 AM »
Hi Paulo and Dave,

Thanks for the helpful replies.

Yes, I am logging for at least 5 hours before PPP.
Unfortunately we never have time to visit the site days ahead to survey, so we will be post processing the camera locations.

The Emlid workflow geotags the photos, but that wipes the yaw, pitch, roll.
 I assume that I will want this in order to best use Agisoft's photo alignment?
In that case I can pull the latitude and longitude from the Emlid Studio events.pos file into a csv.
The only problem here is that there is no associated filename; I can write a little script that pulls the EXIF data from the original photos and then matches them via timestamp.
I am worried this is getting a bit complicated and I missed a simpler solution.
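
For what it's worth, a rough sketch of that kind of matching script (assuming an RTKLIB-style events.pos with '%'-prefixed header lines and date/time/lat/lon/height columns, EXIF DateTimeOriginal in the photos, and that the camera clock and the event timestamps are in the same time zone; folder names and the time tolerance are placeholders):

Code: [Select]
from datetime import datetime, timedelta
from pathlib import Path
from PIL import Image  # Pillow

def photo_timestamp(path):
    # EXIF DateTimeOriginal (tag 36867), e.g. "2024:04:11 09:27:18"
    exif = Image.open(path)._getexif()
    return datetime.strptime(exif[36867], "%Y:%m:%d %H:%M:%S")

def load_events(pos_file):
    # assumed layout: '%' header lines, then date, time, lat, lon, height, ...
    events = []
    for line in Path(pos_file).read_text().splitlines():
        if not line.strip() or line.startswith("%"):
            continue
        cols = line.split()
        stamp = datetime.strptime(cols[0] + " " + cols[1], "%Y/%m/%d %H:%M:%S.%f")
        events.append((stamp, cols[2], cols[3], cols[4]))
    return events

events = load_events("events.pos")  # placeholder path
tolerance = timedelta(seconds=1)    # placeholder matching tolerance

with open("cameras.csv", "w") as out:
    out.write("label,latitude,longitude,altitude\n")
    for photo in sorted(Path("photos").glob("*.JPG")):  # placeholder folder
        t = photo_timestamp(photo)
        nearest = min(events, key=lambda e: abs(e[0] - t))
        if abs(nearest[0] - t) <= tolerance:
            out.write("%s,%s,%s,%s\n" % (photo.name, nearest[1], nearest[2], nearest[3]))
The resulting CSV could then be imported as a camera reference file in Metashape.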

I have two more questions:
1. Can I do this when I have already built the ortho, and just update it? It doesn't seem likely, but just checking which step I need to go back to.
2. This seems to be a good solution for the DJI P1 flights where I have the DJI .pos file, but we are also flying a Micasense MX dual. There is no .pos file here; has anyone found a solution for this case?

Thanks very much again for the time

Jake