Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Echostorm

Pages: [1] 2
1
Bug Reports / Re: Assertion "239123219310121" failed at line 1425
« on: October 08, 2024, 10:56:08 PM »
Thanks Alexey, will test

2
Bug Reports / Re: Assertion "239123219310121" failed at line 1425
« on: October 06, 2024, 02:57:33 AM »
Unfortunately it wasn't deleting images that caused this issue. I have created a test set. The only thing done after the initial align was running a gradual selection filter on an image count of 4, and we still get the same error.

I am about to redo the tests going straight from alignment to dense construction.

The only other thing I think it could be is the use of a multi-camera array, so maybe the sequential images (named by capture date) being imported at separate times is causing this. Anyway, I will continue testing.

3
Bug Reports / Assertion "239123219310121" failed at line 1425
« on: October 06, 2024, 12:27:17 AM »
I had this happen to several models, even in the latest 2.1.3. It fails on the Depth Maps step during point cloud creation. Looking into the error I see "Cameras have null neighbours." After several days of trial and error, I believe I have solved the problem and am posting here to save others the headache it cost me, and hopefully to have the Agisoft team implement a bug fix.

In summary, we process large datasets and noticed several poorly aligned images. We went through and deleted the images that were causing issues (mainly images of just water). We also deleted points with gradual selection filters, but I can't remember whether we deleted images in this step, so that may still be part of the problem.

Tests that failed to resolve the problem (the scripted equivalent of one of these runs is sketched below):
Quality: Low and High - still failed
Uniform sampling: Yes and No - still failed
Point spacing (m): 0.1 and 1 - still failed
Depth filtering: Disabled - still failed
Calculate point colors: Yes and No - still failed
Calculate point confidence: Yes and No - still failed
Reuse depth maps: Yes and No - still failed
Replace default point cloud: Yes and No - still failed
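
In case it helps anyone reproduce this, here is a minimal Python console sketch of one of the runs above, matching the BuildPointCloud settings visible in the log further down (downscale 2, depth filtering disabled, fresh depth maps, no point colors). Treat it as an illustration of the settings we cycled through, not as a fix.

Code:
import Metashape  # run from Metashape's built-in Python console

chunk = Metashape.app.document.chunk

# High quality depth maps (downscale = 2), depth filtering disabled, no reuse of old depth maps
chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.NoFiltering, reuse_depth=False)

# Build the point cloud from those depth maps without colors or confidence
chunk.buildPointCloud(source_data=Metashape.DepthMapsData,
                      point_colors=False,
                      point_confidence=False)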

My theory is that the model still contains tie points linked to deleted or removed images, and that this is what triggers the null neighbour error. The reason I think this is that if I look at the camera IDs, the images with null neighbours are the ones I had removed.

Resetting the alignment didn't solve the issue; it is like the tie points are lingering there. I am running a test to confirm this theory by creating a new chunk that never contained the problem images, instead of deleting them from the existing chunk. Here is the full log.

Code:
ARB_texture_non_power_of_two: Yes
ARB_vertex_buffer_object: Yes
CPU: 13th Gen Intel(R) Core(TM) i9-13900KS
ErrorCode: 239123219310121
OpenGLMaxTextureSize: 32768
OpenGLRenderer: NVIDIA GeForce RTX 4090/PCIe/SSE2
OpenGLStereo: No
OpenGLVendor: NVIDIA Corporation
OpenGLVersion: 4.6.0 NVIDIA 536.23
ProductName: Metashape Pro
ServerURL: https://www.agisoft.com/crash/submit.php
System: Windows 64bit
SystemMemory: 127.8 GB
Vendor: Agisoft
Version: 2.1.1.17695
XLog: 6
loaded depth map partition in 0.007 sec
already partitioned (49<=50 ref cameras, 49<=200 neighb cameras)
group 1/1: preparing 98 cameras images...
tie points loaded in 0.016 s
Found 1 GPUs in 0 sec (CUDA: 0 sec, OpenCL: 0 sec)
Using device: NVIDIA GeForce RTX 4090, 128 compute units, free memory: 22826/24563 MB, compute capability 8.9
  driver/runtime CUDA: 12020/10010
  max work group size 1024
  max work item sizes [1024, 1024, 64]
Using device: CPU 13th Gen Intel(R) Core(TM) i9-13900KS (using 27/32)
group 1/1: cameras images prepared in 27.588 s
group 1/1: 98 x frame
group 1/1: 98 x uint8
group 1/1: expected peak VRAM usage: 1686 MB (676 MB max alloc, 7296x10944 mipmap texture, 16 max neighbors)
Found 1 GPUs in 0 sec (CUDA: 0 sec, OpenCL: 0 sec)
Using device: NVIDIA GeForce RTX 4090, 128 compute units, free memory: 22861/24563 MB, compute capability 8.9
  driver/runtime CUDA: 12020/10010
  max work group size 1024
  max work item sizes [1024, 1024, 64]
Using device 'NVIDIA GeForce RTX 4090' in concurrent. (2 times)
Using device: CPU 13th Gen Intel(R) Core(TM) i9-13900KS (using 26/32)
Camera 8456 skipped (no neighbors)
[CPU 3] group 1/1: estimating depth map for 1/48 camera 542 (16 neighbs)...
Finished processing in 36.651 sec (exit code 0)
Error: Assertion "239123219310121" failed at line 1425
checking for missing images...Checking for missing images...
 done in 0.007 sec
Finished processing in 0.007 sec (exit code 1)
BuildPointCloud: downscale = 2, filter_mode = Disabled, reuse_depth = off, source_data = Depth maps, point_colors = off
Generating depth maps...
Preparing 1192 cameras info...
21 cameras skipped (due to <=2 tie points)
1171/1192 cameras prepaired
cameras data loaded in 2.561 s
cameras graph built in 0.302 s
filtering neighbors with too low common points, threshold=50...
Camera 8456 has no neighbors
Camera 12990 has no neighbors
Camera 8551 has no neighbors
Camera 8553 has no neighbors
Camera 8552 has no neighbors
Camera 8568 has no neighbors
Camera 8569 has no neighbors
Camera 8570 has no neighbors
Camera 8567 has no neighbors
Camera 8549 has no neighbors
Camera 12989 has no neighbors
Camera 8550 has no neighbors
avg neighbors before -> after filtering: 32.0487 -> 20.8138 (35% filtered out)
limiting neighbors to 16 best...
avg neighbors before -> after filtering: 20.8138 -> 15.0176 (28% filtered out)
neighbors number min/1%/10%/median/90%/99%/max: 0, 0, 13, median=16, 16, 16, 16
cameras info prepared in 5.543 s
saved cameras info in 0.373
Partitioning 1171 cameras...
number of mini clusters: 24
24 groups: avg_ref=48.7917 avg_neighb=60.625 total_io=224%
max_ref=50 max_neighb=147 max_total=196
cameras partitioned in 0.01 s
saved depth map partition in 0.014 sec
loaded cameras info in 0.615
loaded depth map partition in 0.016 sec
already partitioned (49<=50 ref cameras, 49<=200 neighb cameras)
group 1/1: preparing 98 cameras images...
tie points loaded in 0.017 s
Found 1 GPUs in 0 sec (CUDA: 0 sec, OpenCL: 0 sec)
Using device: NVIDIA GeForce RTX 4090, 128 compute units, free memory: 22861/24563 MB, compute capability 8.9
  driver/runtime CUDA: 12020/10010
  max work group size 1024
  max work item sizes [1024, 1024, 64]
Using device: CPU 13th Gen Intel(R) Core(TM) i9-13900KS (using 27/32)
group 1/1: cameras images prepared in 25.958 s
group 1/1: 98 x frame
group 1/1: 98 x uint8
group 1/1: expected peak VRAM usage: 1686 MB (676 MB max alloc, 7296x10944 mipmap texture, 16 max neighbors)
Found 1 GPUs in 0 sec (CUDA: 0 sec, OpenCL: 0 sec)
Using device: NVIDIA GeForce RTX 4090, 128 compute units, free memory: 22885/24563 MB, compute capability 8.9
  driver/runtime CUDA: 12020/10010
  max work group size 1024
  max work item sizes [1024, 1024, 64]
Using device 'NVIDIA GeForce RTX 4090' in concurrent. (2 times)
Using device: CPU 13th Gen Intel(R) Core(TM) i9-13900KS (using 26/32)
Camera 8456 skipped (no neighbors)
[CPU 3] group 1/1: estimating depth map for 1/48 camera 542 (16 neighbs)...
Finished processing in 34.57 sec (exit code 0)
Error: Assertion "239123219310121" failed at line 1425
Sparse point cloud not found. Please generate the sparse cloud first.
Sparse point cloud not found. Aligning photos to generate the sparse cloud...
Sparse point cloud not found. Aligning photos to generate the sparse cloud...
Tie point cloud already exists. Proceeding with camera neighbor check...
['AdaptiveOrthophotoMapping', 'AggressiveFiltering', 'Antenna', 'Application', 'Arbitrary', 'AttachedGeometry', 'AverageBlending', 'BBox', 'BlendingMode', 'BridgeDeck', 'Building', 'Calibration', 'CalibrationFormat', 'CalibrationFormatAustralis', 'CalibrationFormatAustralisV7', 'CalibrationFormatCalCam', 'CalibrationFormatCalibCam', 'CalibrationFormatGrid', 'CalibrationFormatInpho', 'CalibrationFormatOpenCV', 'CalibrationFormatPhotoModeler', 'CalibrationFormatPhotomod', 'CalibrationFormatPix4D', 'CalibrationFormatSTMap', 'CalibrationFormatUSGS', 'CalibrationFormatXML', 'Camera', 'CameraGroup', 'CameraMapping', 'CameraTrack', 'CamerasFormat', 'CamerasFormatABC', 'CamerasFormatAeroSys', 'CamerasFormatBINGO', 'CamerasFormatBlocksExchange', 'CamerasFormatBoujou', 'CamerasFormatBundler', 'CamerasFormatCHAN', 'CamerasFormatFBX', 'CamerasFormatInpho', 'CamerasFormatMA', 'CamerasFormatNVM', 'CamerasFormatOPK', 'CamerasFormatORIMA', 'CamerasFormatPATB', 'CamerasFormatRZML', 'CamerasFormatSummit', 'CamerasFormatVisionMap', 'CamerasFormatXML', 'Car', 'Chunk', 'ChunkTransform', 'CirTransform', 'CircularTarget', 'CircularTarget12bit', 'CircularTarget14bit', 'CircularTarget16bit', 'CircularTarget20bit', 'ClassificationMethod', 'CloudClient', 'Component', 'CoordinateSystem', 'Created', 'CrossTarget', 'CustomFaceCount', 'CustomFrameStep', 'DataSource', 'DataType', 'DataType16f', 'DataType16i', 'DataType16u', 'DataType32f', 'DataType32i', 'DataType32u', 'DataType64f', 'DataType64i', 'DataType64u', 'DataType8i', 'DataType8u', 'DataTypeUndefined', 'DepthMap', 'DepthMaps', 'DepthMapsAndLaserScansData', 'DepthMapsData', 'DisabledBlending', 'DisabledInterpolation', 'Document', 'Elevation', 'ElevationData', 'EnabledInterpolation', 'EqualIntervalsClassification', 'EulerAngles', 'EulerAnglesANK', 'EulerAnglesOPK', 'EulerAnglesPOK', 'EulerAnglesYPR', 'Extrapolated', 'FaceCount', 'FilterMode', 'FlatLayout', 'FrameStep', 'GenericMapping', 'GenericPreselection', 'Geometry', 'Ground', 'HeightField', 'HighFaceCount', 'HighNoise', 'HighVegetation', 'Image', 'ImageCompression', 'ImageFormat', 'ImageFormatARA', 'ImageFormatASCII', 'ImageFormatBIL', 'ImageFormatBMP', 'ImageFormatBZ2', 'ImageFormatCR2', 'ImageFormatDDS', 'ImageFormatEXR', 'ImageFormatJP2', 'ImageFormatJPEG', 'ImageFormatJXL', 'ImageFormatKTX', 'ImageFormatNone', 'ImageFormatPNG', 'ImageFormatPNM', 'ImageFormatSEQ', 'ImageFormatSGI', 'ImageFormatTGA', 'ImageFormatTIFF', 'ImageFormatWebP', 'ImageFormatXYZ', 'ImageLayout', 'ImagesData', 'Interpolation', 'JenksNaturalBreaksClassification', 'LargeFrameStep', 'LaserScansData', 'License', 'LowFaceCount', 'LowPoint', 'LowVegetation', 'Manmade', 'MappingMode', 'Marker', 'MarkerGroup', 'Mask', 'MaskOperation', 'MaskOperationDifference', 'MaskOperationIntersection', 'MaskOperationReplacement', 'MaskOperationUnion', 'MaskingMode', 'MaskingModeAlpha', 'MaskingModeBackground', 'MaskingModeFile', 'MaskingModeModel', 'Masks', 'Matrix', 'MaxBlending', 'MediumFaceCount', 'MediumFrameStep', 'MediumVegetation', 'MetaData', 'MildFiltering', 'MinBlending', 'Model', 'ModelData', 'ModelFormat', 'ModelFormat3DS', 'ModelFormatABC', 'ModelFormatCOLLADA', 'ModelFormatCTM', 'ModelFormatDXF', 'ModelFormatDXF_3DF', 'ModelFormatFBX', 'ModelFormatGLTF', 'ModelFormatKMZ', 'ModelFormatLandXML', 'ModelFormatNone', 'ModelFormatOBJ', 'ModelFormatOSGB', 'ModelFormatOSGT', 'ModelFormatPDF', 'ModelFormatPLY', 'ModelFormatSTL', 'ModelFormatTLS', 'ModelFormatU3D', 'ModelFormatVRML', 'ModelFormatX3D', 'ModelGroup', 'ModelKeyPoint', 
'ModerateFiltering', 'MosaicBlending', 'MultiframeLayout', 'MultiplaneLayout', 'NetworkClient', 'NetworkTask', 'NoFiltering', 'NoPreselection', 'OrthoProjection', 'Orthomosaic', 'OrthomosaicData', 'OrthophotoMapping', 'OverlapPoints', 'Photo', 'PointClass', 'PointCloud', 'PointCloudData', 'PointCloudFormat', 'PointCloudFormatCL3', 'PointCloudFormatCOPC', 'PointCloudFormatCesium', 'PointCloudFormatDXF', 'PointCloudFormatE57', 'PointCloudFormatExpe', 'PointCloudFormatLAS', 'PointCloudFormatLAZ', 'PointCloudFormatNone', 'PointCloudFormatOBJ', 'PointCloudFormatOC3', 'PointCloudFormatPCD', 'PointCloudFormatPDF', 'PointCloudFormatPLY', 'PointCloudFormatPTS', 'PointCloudFormatPTX', 'PointCloudFormatPotree', 'PointCloudFormatSLPK', 'PointCloudFormatU3D', 'PointCloudFormatXYZ', 'PointCloudGroup', 'Preselection', 'RPCModel', 'Rail', 'RasterFormat', 'RasterFormatGeoPackage', 'RasterFormatKMZ', 'RasterFormatMBTiles', 'RasterFormatNone', 'RasterFormatTMS', 'RasterFormatTiles', 'RasterFormatWW', 'RasterFormatXYZ', 'RasterTransform', 'RasterTransformNone', 'RasterTransformPalette', 'RasterTransformType', 'RasterTransformValue', 'ReferenceFormat', 'ReferenceFormatAPM', 'ReferenceFormatBramor', 'ReferenceFormatCSV', 'ReferenceFormatMavinci', 'ReferenceFormatNone', 'ReferenceFormatTEL', 'ReferenceFormatXML', 'ReferenceItems', 'ReferenceItemsCameras', 'ReferenceItemsMarkers', 'ReferenceItemsScalebars', 'ReferencePreselection', 'ReferencePreselectionEstimated', 'ReferencePreselectionMode', 'ReferencePreselectionSequential', 'ReferencePreselectionSource', 'Region', 'RoadSurface', 'RotationOrder', 'RotationOrderXYZ', 'RotationOrderXZY', 'RotationOrderYXZ', 'RotationOrderYZX', 'RotationOrderZXY', 'RotationOrderZYX', 'Scalebar', 'ScalebarGroup', 'Sensor', 'Service4DMapper', 'ServiceCesium', 'ServiceMapbox', 'ServiceMelown', 'ServicePicterra', 'ServicePointbox', 'ServicePointscene', 'ServiceSketchfab', 'ServiceType', 'Shape', 'ShapeGroup', 'Shapes', 'ShapesFormat', 'ShapesFormatCSV', 'ShapesFormatDXF', 'ShapesFormatGeoJSON', 'ShapesFormatGeoPackage', 'ShapesFormatKML', 'ShapesFormatNone', 'ShapesFormatSHP', 'Shutter', 'SmallFrameStep', 'SphericalMapping', 'SurfaceType', 'Target', 'TargetType', 'Tasks', 'Thumbnail', 'Thumbnails', 'TiePoints', 'TiePointsData', 'TiledModel', 'TiledModelData', 'TiledModelFormat', 'TiledModelFormat3MX', 'TiledModelFormatCesium', 'TiledModelFormatLOD', 'TiledModelFormatNone', 'TiledModelFormatOSGB', 'TiledModelFormatOSGT', 'TiledModelFormatSLPK', 'TiledModelFormatTLS', 'TiledModelFormatZIP', 'TrajectoryData', 'TrajectoryFormat', 'TrajectoryFormatCSV', 'TrajectoryFormatNone', 'TrajectoryFormatSBET', 'TrajectoryFormatSOL', 'TrajectoryFormatTRJ', 'TransmissionTower', 'Unclassified', 'UndefinedLayout', 'Utils', 'Vector', 'Version', 'Viewpoint', 'Vignetting', 'Water', 'WireConductor', 'WireConnector', 'WireGuard', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'app', 'license', 'utils', 'version']
Attributes of TiePoints.Point object:
['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'coord', 'cov', 'selected', 'track_id', 'valid']
Attributes of TiePoints object:
['Cameras', 'Filter', 'Point', 'Points', 'Projection', 'Projections', 'Track', 'Tracks', '__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__path', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'bands', 'cleanup', 'copy', 'cropSelectedPoints', 'cropSelectedTracks', 'data_type', 'export', 'meta', 'modified', 'pickPoint', 'points', 'projections', 'removeKeypoints', 'removeSelectedPoints', 'removeSelectedTracks', 'renderDepth', 'renderImage', 'renderMask', 'renderNormalMap', 'renderPreview', 'tracks']

['In',
 'Metashape',
 'Out',
 '_',
 '__',
 '___',
 '__builtin__',
 '__builtins__',
 '__doc__',
 '__help_disabled',
 '__loader__',
 '__name__',
 '__package__',
 '__spec__',
 '_dh',
 '_i',
 '_i1',
 '_i10',
 '_i11',
 '_i12',
 '_i13',
 '_i14',
 '_i15',
 '_i16',
 '_i17',
 '_i2',
 '_i3',
 '_i4',
 '_i5',
 '_i6',
 '_i7',
 '_i8',
 '_i9',
 '_ih',
 '_ii',
 '_iii',
 '_oh',
 'camera',
 'camera_tie_point_counts',
 'chunk',
 'doc',
 'exit',
 'get_ipython',
 'help',
 'min_neighbors',
 'min_tie_points',
 'no_neighbors_cameras',
 'point',
 'point_index',
 'projections',
 'quit',
 'tie_point_count',
 'tie_point_projections',
 'track',
 'track_id',
 'tracks']

['In',
 'Metashape',
 'Out',
 '_',
 '_17',
 '__',
 '___',
 '__builtin__',
 '__builtins__',
 '__doc__',
 '__help_disabled',
 '__loader__',
 '__name__',
 '__package__',
 '__spec__',
 '_dh',
 '_i',
 '_i1',
 '_i10',
 '_i11',
 '_i12',
 '_i13',
 '_i14',
 '_i15',
 '_i16',
 '_i17',
 '_i18',
 '_i19',
 '_i2',
 '_i3',
 '_i4',
 '_i5',
 '_i6',
 '_i7',
 '_i8',
 '_i9',
 '_ih',
 '_ii',
 '_iii',
 '_oh',
 'camera',
 'camera_tie_point_counts',
 'chunk',
 'doc',
 'exit',
 'get_ipython',
 'help',
 'min_neighbors',
 'min_tie_points',
 'no_neighbors_cameras',
 'point',
 'point_index',
 'projections',
 'quit',
 'tie_point_count',
 'tie_point_projections',
 'track',
 'track_id',
 'tracks']
Cannot proceed without tie points. Please run Align Photos first.
Cannot proceed without tie points. Please run Align Photos first.
Using 'chunk.tie_points' for tie points.
Tie points found in 'chunk.tie_points'.
Number of tie points: 2261525
Available chunks and their labels:
Using 'chunk.tie_points' for tie points.
Updating tie points...
disabled 1242 points
removed 540703 tracks
Finished processing in 7.286 sec (exit code 1)
checking for missing images...Checking for missing images...
 done in 0.006 sec
Finished processing in 0.006 sec (exit code 1)
BuildPointCloud: downscale = 16, filter_mode = Disabled, reuse_depth = off, source_data = Depth maps, point_colors = on, point_confidence = on
Generating depth maps...
Preparing 1090 cameras info...
cameras data loaded in 0.068 s
cameras graph built in 0.271 s
filtering neighbors with too low common points, threshold=50...
avg neighbors before -> after filtering: 32.5486 -> 21.767 (33% filtered out)
limiting neighbors to 16 best...
avg neighbors before -> after filtering: 21.767 -> 15.5385 (29% filtered out)
neighbors number min/1%/10%/median/90%/99%/max: 7, 10, 14, median=16, 16, 16, 16
cameras info prepared in 1.95 s
saved cameras info in 1.14
Partitioning 1090 cameras...
number of mini clusters: 23
23 groups: avg_ref=47.3913 avg_neighb=58.6957 total_io=224%
max_ref=48 max_neighb=179 max_total=226
cameras partitioned in 0.009 s
saved depth map partition in 0.181 sec
loaded cameras info in 3.554
loaded depth map partition in 0.006 sec
already partitioned (46<=50 ref cameras, 95<=200 neighb cameras)
group 1/1: preparing 141 cameras images...
tie points loaded in 0.014 s
Found 1 GPUs in 0 sec (CUDA: 0 sec, OpenCL: 0 sec)
Using device: NVIDIA GeForce RTX 4090, 128 compute units, free memory: 22861/24563 MB, compute capability 8.9
  driver/runtime CUDA: 12020/10010
  max work group size 1024
  max work item sizes [1024, 1024, 64]
Using device: CPU 13th Gen Intel(R) Core(TM) i9-13900KS (using 27/32)
group 1/1: cameras images prepared in 29.316 s
group 1/1: 141 x frame
group 1/1: 141 x uint8
group 1/1: expected peak VRAM usage: 133 MB (64 MB max alloc, 1456x1638 mipmap texture, 16 max neighbors)
Found 1 GPUs in 0 sec (CUDA: 0 sec, OpenCL: 0 sec)
Using device: NVIDIA GeForce RTX 4090, 128 compute units, free memory: 22865/24563 MB, compute capability 8.9
  driver/runtime CUDA: 12020/10010
  max work group size 1024
  max work item sizes [1024, 1024, 64]
Using device 'NVIDIA GeForce RTX 4090' in concurrent. (2 times)
Using device: CPU 13th Gen Intel(R) Core(TM) i9-13900KS (using 26/32)
[CPU 3] group 1/1: estimating depth map for 1/46 camera 12593 (16 neighbs)...
Finished processing in 37.242 sec (exit code 0)
Error: Assertion "239123219310121" failed at line 1425


Question: Is there a way to clean the model to delete tie points that are no longer associated with any active images, or is a complete project restart necessary? I'm keen to resolve this bug to avoid starting projects from scratch and to help others who might face the same issue.
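
For anyone who wants to experiment, below is a rough, untested sketch of the kind of cleanup I have in mind, written against the chunk.tie_points and Point attributes shown in the dump above (variable names such as active_tracks and orphans are just illustrative). It selects tie points whose tracks no longer project into any enabled, aligned camera and then removes them. Save a copy of the project before trying anything like this.

Code:
import Metashape  # run from Metashape's built-in Python console

chunk = Metashape.app.document.chunk
tp = chunk.tie_points  # the same object whose attributes are dumped in the log above

# Collect the track ids still seen by at least one enabled, aligned camera
active_tracks = set()
for camera in chunk.cameras:
    if not camera.enabled or camera.transform is None:
        continue
    projections = tp.projections[camera]
    if not projections:
        continue
    for proj in projections:
        active_tracks.add(proj.track_id)

# Select every tie point whose track is no longer referenced by an active camera
orphans = 0
for point in tp.points:
    point.selected = point.track_id not in active_tracks
    if point.selected:
        orphans += 1

print("orphaned tie points selected:", orphans)
tp.removeSelectedPoints()  # removeSelectedTracks() is the other option listed in the dump
Metashape.app.document.save()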

4
General / Re: How to update point cloud if i remove images ?
« on: March 29, 2024, 05:57:29 AM »
Normally it removes the tie points from the point cloud when you remove the image, so I am not sure why it is still showing up. You can simply right-click the images you want to remove and click Reset Camera Alignment before removing (or disabling) them. I usually just disable images I don't want; that way, if I have an error or something, I can dig those images up later.
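
If you prefer to do it from the Python console, here is a minimal sketch of the same idea (the label list is hypothetical, substitute your own selection):

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Hypothetical labels of the shots to drop; replace with your own selection
labels_to_drop = {"IMG_0001", "IMG_0002"}

for camera in chunk.cameras:
    if camera.label in labels_to_drop:
        camera.transform = None   # same effect as Reset Camera Alignment in the GUI
        camera.enabled = False    # disable rather than delete so the photos stay recoverable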

5
Yes, I have the same problem as this, though I deal with many more photos. We run corridor surveys with 100,000+ images. We also network process, but one node gets stuck for quite some time at 5%, burning only CPU. It gets through it eventually, but sometimes it can sit there for days while it proceeds. This makes the computer it is running on unusable, with CPU load hovering at ~70% dedicated to Metashape.

6
Feature Requests / Re: Is there a way to change the Basemap displayed?
« on: February 28, 2023, 11:53:28 AM »
Does anyone have the URL to display Bing maps? I haven't been able to get it working.

Thank you.

7
General / Re: Multicamera alignment issues
« on: June 07, 2020, 03:05:01 AM »
Hi Alexey, I will send across the sample dataset.

8
General / Generate UV texture from point class
« on: May 30, 2020, 02:57:59 PM »
Hi guys,

I am working with large city maps, with models of over 250,000,000 polygons, and placing trees in other software. Is it possible to assign a UV to a point class that is then applied to the texture when a textured mesh is generated? Cheers.

9
General / Re: Multicamera alignment issues
« on: September 17, 2019, 05:58:34 AM »
Thanks Matt,

Yes, I have definitely noticed this in the later versions, also coming from the beta days.

Here is a visualisation of my swath https://imgur.com/eMJv34M

As well as the result of my oblique-only (nadir disabled) alignment: https://imgur.com/a/jioLHsJ. As you can see it works, so we should meet the side lap requirements, although in the ortho I am getting the odd double-up.

10
General / Re: Multicamera alignment issues
« on: September 16, 2019, 03:19:31 AM »
It won't let me post any more photos. See link: https://ibb.co/mNbRzRq

It is exceptionally clean data, no pixel movement, no lens blur, optimum lighting conditions, minimal shade movement between runs.

11
General / Re: Multicamera alignment issues
« on: September 16, 2019, 03:17:19 AM »
No luck with the multicamera setting. The few microseconds between captures will likely not let this succeed.

I can't get tie points between Nadir runs when processing with multiple cameras so I cannot get these to display and send to you. 

It is something else and not an overlap issue, as I can get it to stick by removing the nadir and keeping only the oblique; likewise I can get it to stick (not very accurately) with nadir only, although at 25% side lap it isn't great. The forward lap is about 95%.

So that makes me think that the Nadir being a different lens type is causing issues. See photo of overlap with all three frames, with POI highlighted for overlap reference.


12
General / Re: Multicamera alignment issues
« on: September 14, 2019, 08:33:31 AM »
What is curious is that the oblique and nadir images do align; it's the runs that don't, despite the solid side lap.

13
General / Re: Multicamera alignment issues
« on: September 14, 2019, 04:58:10 AM »
I understand what you are saying about only 10% side lap for the nadir not being enough; however, technically it is 70% side lap, as the oblique cameras are still 50mm and they cover the nadir area on either side, so there is plenty of side lap. One would assume these would be calculated as cameras at an angle. I do like how you have the sub-menu style though. I will give that a go, but I am not optimistic. It would be good to be able to allocate a single plane within Agisoft for the scan type, i.e. height field vs. arbitrary, in the align process.

14
General / Re: Multicamera alignment issues
« on: September 14, 2019, 01:10:38 AM »
Note the 50mm lens on the nadir camera is different from the 50mm lens used port and starboard. I have also tried an 85mm lens port and starboard with the same bad results.

15
General / Re: Multicamera alignment issues
« on: September 14, 2019, 01:08:00 AM »
Hi Alexey,

Cameras shot simultaneously (well within a few hundred milliseconds).


As far as the multi-camera setup goes, I have broken the cameras up in the camera calibration section. However, I still get the same issue as if they were all just merged into the one calibration group.
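
For reference, splitting cameras into separate calibration groups can also be scripted. A rough sketch along these lines (the filename-prefix grouping is hypothetical; adjust it to your own naming convention):

Code:
import Metashape

chunk = Metashape.app.document.chunk

groups = {}
for camera in chunk.cameras:
    prefix = camera.label.split("_")[0]   # e.g. "NADIR", "PORT", "STBD" (hypothetical naming)
    if prefix not in groups:
        sensor = chunk.addSensor()        # new calibration group
        sensor.label = prefix
        sensor.type = camera.sensor.type
        sensor.width = camera.sensor.width
        sensor.height = camera.sensor.height
        groups[prefix] = sensor
    camera.sensor = groups[prefix]        # assign the camera to its calibration group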

