Forum

Recent Posts

Pages: [1] 2 3 ... 10
1
General / Re: Agisoft Metashape 2.2.0 pre-release
« Last post by JMR on December 05, 2024, 09:57:26 PM »
As for the orthomosaic clipping by boundary - we are not yet able to reproduce the problem. Do you observe it in any project?
What does the original (not clipped) orthomosaic look like, and what are the source image format and bit depth? Any additional information related to the project specifics might be helpful.
Hello Alexey. Believe it or not, now I cannot get the orthomosaic madness to occur again.
The not-clipped one is this:

I created a duplicate, applying clipping and duplicating the orthophotos, and got the one I sent yesterday with the blocky blacked-out areas. Doing the very same today, the black areas never come up. Why on earth? So for the moment, you can forget the weird issue.
Thanks for your support,

Geobit

2
Python and Java API / Re: Alignment differences between Linux and Windows
« Last post by ELL on December 05, 2024, 06:18:04 PM »
Hi,

Thanks for your reply Alexey, I sent the files over.
3
Python and Java API / Re: pairs in Metashape.Tasks.MatchPhotos doesn't work
« Last post by JyunPingJhan on December 05, 2024, 05:56:46 PM »
Hi Alexey


Here is code for local processing, it works fine.
Code: [Select]
import Metashape

# Squared planar distance between two points (Z axis ignored).
# Note: the value is the SQUARED distance, so the thresholds below
# are squared as well (25 = 5^2, 225 = 15^2).
def calculate_distance(point1, point2):
    """Squared planar distance between two points (ignoring the Z axis)."""
    return (point1.x - point2.x) ** 2 + (point1.y - point2.y) ** 2

# Build a custom pairs list from the planar (E, N) coordinates
def custom_pairs(chunk):
    """Build a custom pairs list based on camera labels and distance rules."""
    cameras = [camera for camera in chunk.cameras if camera.reference.location is not None]  # require a reference location
    pairs = []

    for i, cam1 in enumerate(cameras):
        loc1 = cam1.reference.location
        for j, cam2 in enumerate(cameras):
            if i >= j:  # skip duplicate and self pairings
                continue

            # camera reference position
            loc2 = cam2.reference.location
            distance = calculate_distance(loc1, loc2)

            # pairing rules
            if cam1.label.endswith("PL") and distance <= 25:
                pairs.append((cam1.key, cam2.key))
            elif cam1.label.endswith("PR") and distance <= 25:
                pairs.append((cam1.key, cam2.key))
            elif cam1.label.endswith("SL"):
                if cam2.label.endswith("PL") and distance <= 25:
                    pairs.append((cam1.key, cam2.key))
                elif cam2.label.endswith("PR") and distance <= 25:
                    pairs.append((cam1.key, cam2.key))
                elif cam2.label.endswith("SL") and distance <= 225:
                    pairs.append((cam1.key, cam2.key))
                elif cam2.label.endswith("SR") and distance <= 225:
                    pairs.append((cam1.key, cam2.key))
            elif cam1.label.endswith("SR"):
                if cam2.label.endswith("PL") and distance <= 25:
                    pairs.append((cam1.key, cam2.key))
                elif cam2.label.endswith("PR") and distance <= 25:
                    pairs.append((cam1.key, cam2.key))
                elif cam2.label.endswith("SL") and distance <= 225:
                    pairs.append((cam1.key, cam2.key))
                elif cam2.label.endswith("SR") and distance <= 225:
                    pairs.append((cam1.key, cam2.key))

    return pairs

# main script
doc = Metashape.app.document
chunk = doc.chunk

# build the custom pairs list
pairs = custom_pairs(chunk)
print(len(pairs))
print(pairs[0])

chunk.matchPhotos(pairs=pairs, keypoint_limit=5000, tiepoint_limit=5000,
                  generic_preselection=False, reference_preselection=False)

# align the cameras
chunk.alignCameras()



And the following is code for network processing.

Code: [Select]

import Metashape
import tkinter as tk
from tkinter import messagebox
import subprocess

# Squared planar distance between two points (Z axis ignored);
# the thresholds below are squared as well (25 = 5^2, 225 = 15^2).
def calculate_distance(point1, point2):
    """Squared planar distance between two points (ignoring the Z axis)."""
    return (point1.x - point2.x) ** 2 + (point1.y - point2.y) ** 2

# Build a custom pairs list from the planar (E, N) coordinates
def custom_pairs(chunk):
    """Build a custom pairs list based on camera labels and distance rules."""
    cameras = [camera for camera in chunk.cameras if camera.reference.location is not None]  # require a reference location
    pairs = []

    for i, cam1 in enumerate(cameras):
        loc1 = cam1.reference.location
        for j, cam2 in enumerate(cameras):
            if i >= j:  # skip duplicate and self pairings
                continue

            # camera reference position
            loc2 = cam2.reference.location
            distance = calculate_distance(loc1, loc2)

            # pairing rules
            if "PL" in cam1.label and distance <= 25:
                pairs.append((cam1.key, cam2.key))
            elif "PR" in cam1.label and distance <= 25:
                pairs.append((cam1.key, cam2.key))
            elif "SL" in cam1.label:
                if "PL" in cam2.label and distance <= 25:
                    pairs.append((cam1.key, cam2.key))
                elif "PR" in cam2.label and distance <= 25:
                    pairs.append((cam1.key, cam2.key))
                elif "SL" in cam2.label and distance <= 225:
                    pairs.append((cam1.key, cam2.key))
                elif "SR" in cam2.label and distance <= 225:
                    pairs.append((cam1.key, cam2.key))
            elif "SR" in cam1.label:
                if "PL" in cam2.label and distance <= 25:
                    pairs.append((cam1.key, cam2.key))
                elif "PR" in cam2.label and distance <= 25:
                    pairs.append((cam1.key, cam2.key))
                elif "SL" in cam2.label and distance <= 225:
                    pairs.append((cam1.key, cam2.key))
                elif "SR" in cam2.label and distance <= 225:
                    pairs.append((cam1.key, cam2.key))

    return pairs


def align_batch():

    root = tk.Tk()
    root.withdraw()
    response = messagebox.askquestion("Confirm", "Run Batch Align?")
    if response == 'no':
        return

    doc = Metashape.app.document
    doc.save()
    if not doc.chunks:
        Metashape.app.messageBox("No chunks in the project!")
        return

    # create a NetworkClient to communicate with the server
    network_client = Metashape.NetworkClient()
    server_ip = "140.118.119.121"  # replace with the actual server IP
    network_client.connect(server_ip)

    # task list
    tasks = []
    enabled_chunks = []

    # create a MatchPhotos task for every enabled chunk
    for chunk in doc.chunks:
        if chunk.enabled:
            enabled_chunks.append(chunk)

            pairs = custom_pairs(chunk)  # build the custom pairs list
            #print(pairs[5])

            # create the MatchPhotos task
            match_photos_task = Metashape.Tasks.MatchPhotos()
            match_photos_task.pairs = pairs
            match_photos_task.keypoint_limit = 5000
            match_photos_task.tiepoint_limit = 5000
            match_photos_task.downscale = 1
            match_photos_task.generic_preselection = False
            match_photos_task.reference_preselection = True
            match_photos_task.filter_mask = True
            match_photos_task.mask_tiepoints = False
            match_photos_task.filter_stationary_points = False
            match_photos_task.guided_matching = False
            match_photos_task.reset_matches = True

            # convert to a NetworkTask and add it to the task list
            network_task = match_photos_task.toNetworkTask(chunk)
            tasks.append(network_task)

    ## create an AlignCameras task and add it to the task list
    align_cameras_task = Metashape.Tasks.AlignCameras()
    align_cameras_task.reset_alignment = True  # other AlignCameras parameters can be set here
    align_cameras_task.adaptive_fitting = False

    network_task_align = align_cameras_task.toNetworkTask(enabled_chunks)
    tasks.append(network_task_align)

    ## submit all tasks with createBatch
    project_path = doc.path  # path of the current project
    batch_id = network_client.createBatch(project_path, tasks)  # submit the batch

    # start the batch
    network_client.setBatchPaused(batch_id, False)

    Metashape.app.quit()
    metashape_executable = r"C:\Program Files\Agisoft\Metashape Pro\metashape.exe"  # adjust to the actual installation path
    subprocess.Popen([metashape_executable, project_path])

Metashape.app.addMenuItem("PRIDE/B. Batch Align", align_batch)
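One incidental difference between the two scripts above: the local version matches labels with str.endswith, while the network version uses substring containment ("PL" in label). The two can select different cameras when a tag occurs mid-label, which is worth ruling out when comparing local and network results. A quick illustration (labels are hypothetical):

```python
# Hypothetical labels; the "PL" tag appears mid-string in the first one.
labels = ["IMG_PL_001", "IMG_001_PL"]

# str.endswith only matches when the tag terminates the label.
endswith_hits = [s for s in labels if s.endswith("PL")]

# Substring containment matches the tag anywhere in the label.
substring_hits = [s for s in labels if "PL" in s]

print(endswith_hits)   # ['IMG_001_PL']
print(substring_hits)  # ['IMG_PL_001', 'IMG_001_PL']
```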


4
Python and Java API / Re: pairs in Metashape.Tasks.MatchPhotos doesn't work
« Last post by Alexey Pasumansky on December 05, 2024, 05:46:16 PM »
Hello JyunPingJhan,

Can you please share the updated code that works as expected in local mode? And please confirm whether you are using the same script for Run Script task started in network mode?
5
Python and Java API / Re: Alignment differences between Linux and Windows
« Last post by Alexey Pasumansky on December 05, 2024, 05:42:48 PM »
Hello ELL,

Is it possible to share to support@agisoft.com one project aligned on Linux and two projects with the considerable variation processed on Windows, but all three based on the same dataset?
6
Python and Java API / Re: pairs in Metashape.Tasks.MatchPhotos doesn't work
« Last post by JyunPingJhan on December 05, 2024, 04:22:05 PM »
Hi Alexey

Thanks for your replies.

Actually, I have tried pairs.append((cam1.key, cam2.key)); it doesn't work with network processing either.

Meanwhile, if I run local processing with pairs.append((i, j)), features are only detected on part of the images, and which part varies if I recreate the chunk.

Only pairs.append((cam1.key, cam2.key)) works with chunk.matchPhotos(...).

I have thousands of images and need custom pairs to make sure the connections are correct, but I still can't make it run with network processing.

Hope there is a solution.
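For what it's worth, the difference between the two pair encodings can be shown without Metashape: list positions and camera keys only coincide while the keys happen to be contiguous, so building pairs from camera.key is the safe option. A minimal sketch with plain tuples (keys and labels are made up):

```python
# Hypothetical camera records: (key, label). Keys are not contiguous,
# e.g. because cameras were removed from the chunk at some point.
cameras = [(0, "A_PL"), (2, "A_PR"), (5, "B_PL")]

# Enumeration indices: (0, 1, 2, ...) silently refer to different
# cameras once keys and list positions diverge.
index_pairs = [(i, j) for i in range(len(cameras))
               for j in range(i + 1, len(cameras))]

# Stable camera keys, as chunk.matchPhotos expects.
key_pairs = [(cameras[i][0], cameras[j][0])
             for i in range(len(cameras))
             for j in range(i + 1, len(cameras))]

print(index_pairs)  # [(0, 1), (0, 2), (1, 2)]
print(key_pairs)    # [(0, 2), (0, 5), (2, 5)]
```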
7
General / Re: Agisoft Metashape 2.2.0 pre-release
« Last post by Alexey Pasumansky on December 05, 2024, 03:56:53 PM »
Hello José,

The issue with the point cloud export should be fixed in the next 2.2.0 update.

As for the orthomosaic clipping by boundary - we are not yet able to reproduce the problem. Do you observe it in any project?
What does the original (not clipped) orthomosaic look like, and what are the source image format and bit depth? Any additional information related to the project specifics might be helpful.
8
Python and Java API / Re: export original colors as rgb with exportRaster
« Last post by Tori on December 05, 2024, 03:55:23 PM »
Hi,

yes, I am referring to the color of the original scene, so the first. And from my current understanding, exporting an orthomosaic with the same resolution should work, thank you.
Would I then export it like this?

Code: [Select]
path_DEM_color = os.path.join(path, filenameDEM_color)
chunk.exportRaster(path=path_DEM_color,
                   image_format=Metashape.ImageFormatXYZ,
                   raster_transform=Metashape.RasterTransformNone,
                   source_data=Metashape.DataSource.OrthomosaicData,
                   resolution_x=0.0005, resolution_y=0.0005)

9
Hi,

Thank you for your reply. I never set the capture distance, and the alignment process is still pretty fast compared to the other steps.

I will check if I can see any improvements using this option.

Regards,
Vincent
10
Python and Java API / Alignment differences between Linux and Windows
« Last post by ELL on December 05, 2024, 03:38:43 PM »
TL;DR:
Metashape processing shows significant variance differences between Windows and Linux for alignments of frames from videos taken with f2.2 and f1.8 lenses. Linux has minimal variance, while Windows shows high variance in camera errors and f-values, especially for the f2.2 videos. Why, and how can it be fixed?
 
I am trying to process smartphone videos taken with different lenses with the Metashape Python API (2.1.2), and I noticed significant differences between Windows and Linux. The processing machines are identical apart from the processor, which is one generation newer on the Linux machine (see details below). One batch of videos (Videos 0, 1, 2, 3) was taken with an f1.8 lens at 1920x1080 resolution, and the other batch (Videos 5, 6, 7) with an f2.2 wide-angle lens at 4000x3000 resolution.

I extract a frame every 0.2 seconds and assign the RTK-enabled geolocation to each frame, then run the photo alignment in Metashape with the parameters below for each run. I repeat the experiment (addPhotos, matchPhotos, alignCameras) 15 times for each video, so 15 runs per video on Windows and 15 runs per video on Linux. I am investigating the variance between runs in the camera errors and the estimated internal camera parameters, letting Metashape estimate the internal parameters freely and providing no reference information.

The results are intriguing; I posted a graph in the attachment. There is a significant difference between Linux and Windows. On Linux, the videos show no variance in camera errors at all, whereas on Windows the errors for the f2.2 videos are highly variable. Likewise, on Windows the f-value estimate for the f2.2 videos is highly variable while the f-value for the f1.8 videos barely varies, and on Linux the f-value shows almost no variance. I am not very knowledgeable about the other parameters, but in general they show higher variance on Windows than on Linux. Can someone explain why these platform differences occur, and how I could mitigate them?
 
Alignment Settings:
 
Accuracy: High
Generic preselection: No
Reference preselection: Source
Key point limit: 80,000
Key point limit per Mpx: 1,000
Tie point limit: 5,000
Exclude stationary tie points: Yes
Guided image matching: No
Adaptive camera model fitting: Yes
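For reproducibility, the GUI settings above map roughly onto keyword arguments of chunk.matchPhotos and chunk.alignCameras. A sketch of that mapping ("Accuracy: High" corresponds to downscale=1; the keypoint_limit_per_mpx argument is assumed to be available in the 2.x API):

```python
# Keyword arguments mirroring the GUI settings listed above (sketch;
# keypoint_limit_per_mpx is assumed to exist in the Metashape 2.x API).
MATCH_KWARGS = dict(
    downscale=1,                    # Accuracy: High
    generic_preselection=False,     # Generic preselection: No
    reference_preselection=True,    # Reference preselection: Source
    keypoint_limit=80_000,          # Key point limit
    keypoint_limit_per_mpx=1_000,   # Key point limit per Mpx
    tiepoint_limit=5_000,           # Tie point limit
    filter_stationary_points=True,  # Exclude stationary tie points: Yes
    guided_matching=False,          # Guided image matching: No
)
ALIGN_KWARGS = dict(adaptive_fitting=True)  # Adaptive camera model fitting: Yes

# Per run: chunk.matchPhotos(**MATCH_KWARGS); chunk.alignCameras(**ALIGN_KWARGS)
print(sorted(MATCH_KWARGS))
```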
 
System Information:
 
CPU: Intel(R) Core(TM) i9-14900K (Linux) and Intel(R) Core(TM) i9-13900K (Windows)
Memory: 128GiB
GPU: NVIDIA GeForce RTX 4090