Forum

Author Topic: Segmentation fault at the process end  (Read 2375 times)

FrancoCorleone

  • Newbie
  • Posts: 13
Segmentation fault at the process end
« on: August 02, 2022, 01:48:37 AM »
Hi again. I opened another ticket, as the previous one was only about running in Docker.
But this time I ran the processing script on a Windows machine. It processes everything: I can see that the last log.info call in my script is executed, and I explicitly run some other operations after the Metashape part to confirm it completes correctly. Then, at the very end, it shows a segmentation fault. What can be the reason for that?

Looks like this problem is not limited to Docker: it works on my Mac and on native Linux, but it segfaults both on Windows and inside Docker.
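
One thing worth checking is whether the crash happens after my code has finished, while the Python interpreter is shutting down. Here is a minimal sketch of that test (the project path is a placeholder, and os._exit is only a diagnostic, since it skips normal cleanup):

        import os
        import sys

        import Metashape

        doc = Metashape.Document()
        doc.open("project.psx")  # placeholder path, not the real project
        chunk = doc.chunk

        # ... alignment / mesh / texture steps go here ...

        doc.save()
        print("script body finished")
        sys.stdout.flush()

        # Drop the references to the Metashape objects, then exit without running
        # normal interpreter teardown. If the segfault disappears, the crash
        # happens during shutdown rather than in the processing code itself.
        del chunk
        del doc
        os._exit(0)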

Greetings

FrancoCorleone

  • Newbie
  • Posts: 13
Re: Segmentation fault at the process end
« Reply #1 on: August 02, 2022, 03:02:34 PM »
I did some additional tests, commenting out operations from the bottom up, and it seems that, at least in Docker (I don't currently have access to a Windows machine to test), the segmentation fault comes from this call:

        chunk.buildTexture(blending_mode=Metashape.MosaicBlending,
                           texture_size=4096,
                           texture_type=Metashape.Model.TextureType.DiffuseMap,
                           fill_holes=True,
                           ghosting_filter=True, progress=lambda x: print(f"Progress {x}"))

I added a progress callback to see what's going on, and weirdly enough this is how the end of the output looks:

Progress 84.99994046368285
Progress 85.0
done in 1.35395 sec
Progress 85.0
Progress 100.0

Like... the progress keeps logging after it's done? How's that possible?
I don't know if that's just a coincidence. I have no idea what's going on, but this is super annoying.
I'd appreciate any help.
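
To check whether the callback really fires after buildTexture() has returned, I could set a flag right after the call and log it from the callback. This is just a diagnostic sketch around the same call as above (chunk is the same chunk object):

        import Metashape

        build_done = False

        def report_progress(value):
            # If "after return: True" ever shows up, the callback was invoked
            # after buildTexture() had already returned to Python.
            print(f"Progress {value} (after return: {build_done})")

        chunk.buildTexture(blending_mode=Metashape.MosaicBlending,
                           texture_size=4096,
                           texture_type=Metashape.Model.TextureType.DiffuseMap,
                           fill_holes=True,
                           ghosting_filter=True,
                           progress=report_progress)
        build_done = True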

FrancoCorleone

  • Newbie
  • Posts: 13
Re: Segmentation fault at the process end
« Reply #2 on: August 03, 2022, 02:05:11 PM »
I confirmed the same behaviour on an EC2 g4dn instance, which has 32 GB of RAM and a Tesla T4 GPU. There were no hiccups during processing, so I don't think it's RAM related.