Forum

Author Topic: Cannot achieve multi-threading with concurrent.futures.ThreadPoolExecutor  (Read 4887 times)

dobedobedo

  • Newbie
  • Posts: 31
Hi,
I noticed that in the official Python scripts collection (https://github.com/agisoft-llc/metashape-scripts), footprints_to_shapes.py and automatic_masking.py use concurrent.futures.ThreadPoolExecutor to achieve multi-threaded processing.
However, when I use the same method in my script, it seems to always use only one core, no matter whether I run it from Tools > Run Script or directly from the Python console. Could you please enlighten me as to what might be going wrong?
Below is my code:
Code:
import concurrent.futures

def GetPointMatchList(chunk, *band, collate=True):
    point_cloud = chunk.point_cloud
    points = point_cloud.points
    point_proj = point_cloud.projections
    npoints = len(points)
    camera_matches = dict()
    point_matches = dict()
    for camera in chunk.cameras:
        total = dict()
        point_index = 0
        # If no band number is given, only process the master channel
        try:
            proj = point_proj[camera.planes[band[0]]]
        except IndexError:
            proj = point_proj[camera]
        except KeyError:
            # Skip the camera if there is no projection information
            continue

        for cur_point in proj:
            track_id = cur_point.track_id
            # Match the point track ID
            while point_index < npoints and points[point_index].track_id < track_id:
                point_index += 1
            if point_index < npoints and points[point_index].track_id == track_id:
                # Add point matches and save their pixel coordinates to list
                total[point_index] = cur_point.coord
                try:
                    point_matches[point_index][camera.planes[band[0]]] = cur_point.coord
                except KeyError:
                    point_matches[point_index] = dict()
                    try:
                        point_matches[point_index][camera.planes[band[0]]] = cur_point.coord
                    except IndexError:
                        point_matches[point_index][camera] = cur_point.coord
                except IndexError:
                    point_matches[point_index][camera] = cur_point.coord
        try:
            camera_matches[camera.planes[band[0]]] = total
        except IndexError:
            camera_matches[camera] = total

    if collate:
        # If collate is True, keep only tie points which have at least 3 observations in the same band
        point_to_keep = set()
        new_camera_matches = dict()
        new_point_matches = dict()
        for point_index, value in point_matches.items():
            if len(value) >= 3:
                new_point_matches[point_index] = value
                point_to_keep.add(point_index)
        for camera, matches in camera_matches.items():
            new_camera_matches[camera] = {point: coord for point, coord in matches.items()
                                          if point in point_to_keep}
        # Use the collated dictionaries as the result
        camera_matches, point_matches = new_camera_matches, new_point_matches

    # camera_matches maps each camera to its point indices and their projected pixel coordinates
    # point_matches maps each point to its pixel coordinates in the different cameras
    return camera_matches, point_matches

# chunk and bands are assumed to be defined earlier in the script
camera_matches = dict()
point_matches = dict()
with concurrent.futures.ThreadPoolExecutor() as exe:
    # Submit one task per band index
    results = exe.map(lambda _index: GetPointMatchList(chunk, _index),
                      [_index for _index, _band_name in enumerate(bands)])
    for cameras, points in results:
        camera_matches = {**camera_matches, **cameras}
        point_matches = {**point_matches, **points}
I also tried passing multiprocessing.cpu_count() as the max_workers argument of the executor, but that didn't work either.
« Last Edit: April 05, 2022, 05:55:40 PM by dobedobedo »

PolarNick

  • Full Member
  • Posts: 103
Hi, Python has the GIL (Global Interpreter Lock), so Python code itself always runs on a single core at a time.

But footprints_to_shapes.py and automatic_masking.py run faster because, while their Python-side code still runs on a single CPU core at a time, the C++ side of the code takes the major part of their execution time and can run on multiple CPU cores.

For example, inside the calls to project/pickPoint, and inside the image-processing operations in scipy (which are also written in C++), execution moves from the Python side of the code to the C++ side. These calls release Python's GIL when they start, which makes multi-core execution possible. Execution still falls back to a single core after that, because when control returns to the Python side, the GIL has to be acquired again.
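
To illustrate the point, here is a minimal, standalone sketch (not from the original reply; the functions and timings are only illustrative and are not Metashape API calls). Threads only give a speed-up when the work they run releases the GIL:
Code:
import concurrent.futures
import time

def pure_python_work(n):
    # Pure Python loop: holds the GIL the whole time, so threads cannot overlap
    total = 0
    for i in range(n):
        total += i * i
    return total

def gil_releasing_work(seconds):
    # time.sleep releases the GIL, similar to long-running C++ calls
    time.sleep(seconds)
    return seconds

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as exe:
    t0 = time.perf_counter()
    list(exe.map(pure_python_work, [2_000_000] * 4))
    print("pure Python in 4 threads:", time.perf_counter() - t0)    # roughly 4x one task

    t0 = time.perf_counter()
    list(exe.map(gil_releasing_work, [1.0] * 4))
    print("GIL-releasing in 4 threads:", time.perf_counter() - t0)  # roughly 1 second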

dobedobedo

  • Newbie
  • Posts: 31
I see. Thanks for the explanation. Does that mean I have to use C/C++ tools to achieve multi-threading? I'm just wondering whether there are easier ways to achieve multi-threading without changing too much code.
I also tried dask.delayed before, but that didn't seem to work either.

Hugh

  • Newbie
  • Posts: 19
If it's all pure Python code, you could use the multiprocessing module (https://docs.python.org/3/library/multiprocessing.html), which creates new Python processes and passes data between them.
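
A minimal, generic sketch of that suggestion (an illustration only, not code from this thread; count_positive is a made-up placeholder function): multiprocessing.Pool runs a pure-Python function in separate worker processes, each with its own interpreter and its own GIL, and pickles arguments and results between them.
Code:
import multiprocessing

def count_positive(args):
    # Placeholder for pure-Python, CPU-bound work
    index, data = args
    return index, sum(1 for x in data if x > 0)

if __name__ == "__main__":
    tasks = [(i, list(range(-100, 100))) for i in range(8)]
    # One worker process per CPU core; arguments and results are pickled
    with multiprocessing.Pool(processes=multiprocessing.cpu_count()) as pool:
        results = pool.map(count_positive, tasks)
    print(results)

As the later replies note, inside Metashape's embedded interpreter this approach can be problematic, because each new process may start another Metashape instance.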

dobedobedo

  • Newbie
  • Posts: 31
Quote from: Hugh
If it's all pure Python code, you could use the multiprocessing module (https://docs.python.org/3/library/multiprocessing.html), which creates new Python processes and passes data between them.

Thanks for the suggestion. I remember trying it before, but it spawned other Metashape application windows with an empty project, and the program got stuck because the data was not passed to the other Metashape processes. Is that the case? Maybe my memory is wrong.

Emanuele1234

  • Newbie
  • Posts: 14
Hi, dobedobedo,

Have you solved your problem? I have a similar one and I'd like to see how you solved the situation... I'm really stuck...

In any case, thank you in advance.

Best regards.
Emanuele

Jordan Pierce

  • Newbie
  • Posts: 23
« Reply #6 on: November 15, 2022, 09:33:31 PM »
Quote from: dobedobedo
Thanks for the suggestion. I remember trying it before, but it spawned other Metashape application windows with an empty project, and the program got stuck because the data was not passed to the other Metashape processes. Is that the case? Maybe my memory is wrong.

Switch from Pool to ThreadPool and it should work without spawning a bunch of Metashape processes.
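
A minimal sketch of that suggestion (my own reading, not code from the thread; it assumes chunk and bands are defined as in the first post and reuses GetPointMatchList from there):
Code:
from multiprocessing.pool import ThreadPool

camera_matches = dict()
point_matches = dict()
# ThreadPool uses threads inside the current process, so no extra
# Metashape instances are started and nothing has to be pickled
with ThreadPool() as pool:
    results = pool.map(lambda _index: GetPointMatchList(chunk, _index),
                       range(len(bands)))
for cameras, points in results:
    camera_matches.update(cameras)
    point_matches.update(points)

As explained earlier in the thread, this still only helps to the extent that the underlying Metashape calls release the GIL; pure-Python loops remain single-core.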