Show Posts

Messages - remco.kootstra

1
Feature Requests / Re: pointcloud intensity import and export
« on: March 07, 2024, 10:20:34 AM »
Hi,

We have had some updates since DavidD started this topic, but I still see that the intensity is displayed as black points.

If I export from Metashape, I see that the intensity is still there, so I think it's now more of a display problem?
Can you provide a solution for displaying the intensity? It would help a lot, since the intensity point cloud is in most cases better than the RGB one.

Thanks and looking forward to your reply!

Regards,

Remco

2
Hi Alexey,

I just came across this post and it seems to be the solution I'm looking for.
However, I can't get the script running (possibly because it is outdated).

We are using Metashape V2.0.1. Can you tell if we need to change the code below?
Code:
#export format:
#point_ID X Y Z
#[camera_label x_proj y_proj]

import Metashape

doc = Metashape.app.document
chunk = doc.chunk
M = chunk.transform.matrix
point_cloud = chunk.point_cloud
projections = point_cloud.projections
points = point_cloud.points
npoints = len(points)

path = Metashape.app.getSaveFileName("Specify export path and filename:", filter="Text / CSV (*.txt *.csv);;All files (*.*)")
file = open(path, "wt")
print("Script started...")

# map track_id -> point index (-1 for tracks without a reconstructed point)
point_ids = [-1] * len(point_cloud.tracks)
for point_id in range(0, npoints):
    point_ids[points[point_id].track_id] = point_id

points_proj = {}

# collect the image measurements (projections) of every valid point
for photo in chunk.cameras:

    if not photo.transform:  # skipping cameras that are not aligned
        continue

    for proj in projections[photo]:
        track_id = proj.track_id
        point_id = point_ids[track_id]

        if point_id < 0:
            continue
        if not points[point_id].valid:  # skipping invalid points
            continue

        x, y = proj.coord
        entry = "\n" + photo.label + "\t{:.2f}\t{:.2f}".format(x, y)
        if point_id in points_proj:
            points_proj[point_id] += entry
        else:
            points_proj[point_id] = entry

# write point coordinates (projected to the chunk CRS, if set) and their projections
for point_index in range(npoints):

    if not points[point_index].valid:
        continue

    coord = M * points[point_index].coord
    coord.size = 3  # drop the homogeneous component
    if chunk.crs:
        X, Y, Z = chunk.crs.project(coord)
    else:
        X, Y, Z = coord

    line = points_proj.get(point_index, "")  # empty if the point has no projections

    file.write("{}\t{:.6f}\t{:.6f}\t{:.6f}\t{:s}\n".format(point_index, X, Y, Z, line))

file.close()
print("Finished")

Thanks and looking forward to your reply.

Kind regards,

Remco

3
Bug Reports / Re: Assertion "3478972301441" failed at line 720
« on: February 02, 2023, 09:34:34 AM »
Quote
Just a quick update, I have no explanation for why this worked, but! Originally, I had aligned my photos, then used gradual selection to use the 'Reconstruction Uncertainty' function. I had then used the Optimise camera alignment, with all but k4 checked. This is when I tried to create a dense cloud and ran into the above error.

I did an extra 'align photos', then followed with build point cloud, and now it's working! So I'm not sure why but hope it helps someone.

I'm running into the same issue. After alignment I used the gradual selection to remove some of the uncertainties. When I try to generate a dense cloud I get the same assertion error message.
I also generated a dense cloud without removing the uncertainties with the gradual selection tool, and then there are no problems.

In my opinion something in V2.0 goes wrong after removing some uncertainties with the gradual selection tool.
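
For reference, the gradual-selection step I run before optimizing cameras looks roughly like this from the Python console. This is only a sketch for Metashape 2.x (assuming the chunk.tie_points / TiePoints.Filter API), and the threshold is just an example value:

Code:
# Sketch (Metashape 2.x): remove tie points by Reconstruction Uncertainty,
# then re-optimize the camera alignment. The threshold of 10 is only an example value.
import Metashape

chunk = Metashape.app.document.chunk

f = Metashape.TiePoints.Filter()
f.init(chunk, criterion=Metashape.TiePoints.Filter.ReconstructionUncertainty)
f.selectPoints(10)
chunk.tie_points.removeSelectedPoints()

# everything checked except k4, as described in the quoted post above
chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                      fit_b1=True, fit_b2=True,
                      fit_k1=True, fit_k2=True, fit_k3=True, fit_k4=False,
                      fit_p1=True, fit_p2=True)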

Regards,

Remco

4
General / Orthomosaic from large aerial dataset issues
« on: November 23, 2022, 12:58:38 PM »
Hi,

I'm having some issues generating orthomosaics from large aerial datasets.

Data is from a Phase One iXM-100 camera, flown both top-down and oblique at 45-degree angles.
The dense mesh and the tiled mesh look absolutely fantastic.
We also need to generate orthos from this data.

When I go the DEM route, the estimated calculation time for one ortho is over 12 days for a chunk of around 4,000 images!

So I decided to go the 2.5D height-field route instead: I generated a height-field mesh from the dense cloud and used this as the surface for generating the orthomosaic.
Time-wise this works very well and the ortho looks OK, apart from the edges of buildings etc.

I attached some screenshots.

If you look at the ortho you can see the jagged edges along the roof line; this is visible everywhere.
The mesh you see is the generated 2.5D mesh. As you can see, it also has the jagged edges, so it's logical that they are visible in the ortho itself.
Finally, there's an image of the DEM, which looks much smoother.

Using the DEM for the ortho, on the other hand, results in the long calculation times mentioned above.
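
For reference, the two routes look roughly like this in the Python API. This is only a sketch for Metashape 2.x, and the settings are simply what I used, not recommendations:

Code:
# Sketch (Metashape 2.x) of the two orthomosaic routes described above.
import Metashape

chunk = Metashape.app.document.chunk

# Route 1: DEM as the projection surface (smooth, but very slow for us)
chunk.buildDem(source_data=Metashape.PointCloudData)
chunk.buildOrthomosaic(surface_data=Metashape.ElevationData)

# Route 2: 2.5D height-field mesh from the dense cloud as the surface (fast, but jagged edges)
chunk.buildModel(surface_type=Metashape.HeightField,
                 source_data=Metashape.PointCloudData,
                 face_count=Metashape.HighFaceCount)
chunk.buildOrthomosaic(surface_data=Metashape.ModelData)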

Does anyone know the best route to generate orthos from large aerial datasets, and how to get better results when using the 2.5D mesh?

Thanks in advance.

5
First of all, thank you for your answers!

With the tiling script we can manage most of the problems we had.

We are also looking into buying a new workstation with 8 TB of NVMe disk space. Thanks for the suggestion, flyzk! We see similar disk-space usage, and 8 TB looks to be sufficient.
We will also add one or two RTX 4090 graphics cards, so I think we will manage to process the project properly.

@PolarNick, we will also try the suggestion of reducing the FaceCount and hopefully this will not change the geometry and accuracy of the model too much.
We will also check the log for the Vulkan message.
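
For anyone following along: as far as I understand, the face count can be reduced either when building the mesh or afterwards by decimating it. A rough sketch (Metashape 2.x, example values only):

Code:
# Sketch: two alternative ways to reduce the mesh face count (example values only)
import Metashape

chunk = Metashape.app.document.chunk

# lower face count at build time...
chunk.buildModel(source_data=Metashape.DepthMapsData,
                 face_count=Metashape.MediumFaceCount)
# ...or decimate an existing mesh to an explicit target
chunk.decimateModel(face_count=5000000)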

Thanks guys!

6
General / Managing Large Datasets (with Phase One 100 Megapixel Imagery)
« on: November 14, 2022, 05:03:31 PM »
Hi Guys,

We are rather new to processing our datasets in Metashape.
The forum provides a lot of information on general workflows and challenges, but we couldn't find any information regarding the processing of (really) large projects.

I would therefore like to ask if you can give us some advice on what to do and what not to do when processing large datasets with Metashape.
Our data consists mainly of aerial data shot with drones. Our main camera for data collection is the Phase One iXM-100, so 100 MP images.
All our data is flown with a DJI M300 RTK drone.

The datasets we process average around 6,000 images maximum (100 MP), but our projects are getting bigger, and although we do not mind the increase in processing time, we do stumble upon some problems in certain processing steps.

For instance, we are currently gathering data to build a digital twin of a large area in the Netherlands.
The area is flown in 9 parts and the total number of 100 MP images is 40,000.
We will also scan the entire area from the ground with a mobile mapping system.

The entire area is flown top-down (camera facing downward) and oblique at a 45-degree angle from every side, like survey planes do.
To get a workflow going, we are currently processing a dataset of a factory site, also shot with the Phase One iXM-100 under the DJI M300 RTK. The total number of images is 17,100.
 
We stumbled upon a few challenges along the way, but in the end the alignment seems to go well. It's the mesh/model generation step where we are currently having trouble.
After about six days of processing, when the calculation is roughly 7/8 done, the system just freezes every time.

As mentioned, the alignment went well and all images are calculated properly.
The challenge starts when we generate the dense cloud at Medium (or High) quality. The depth maps are created, but halfway through the generation of the dense cloud the system freezes (after six days).


We monitored the PC to see if we could find the issue. It seems to have more than enough disk space and does not run out of RAM.
Our specs are:
- CPU: AMD Ryzen Threadripper 3970X 32-core, 3.70 GHz
- RAM: 128 GB
- GPU: NVIDIA GeForce RTX 2080 Ti

How can we handle these large projects properly? Is it even doable to process a project this size (and the upcoming one of 40,000 images) in one go? Or should we split the project up to make it more manageable?

I found the Metashape scripts and see one that splits a dataset into chunks. Is this the way to go? And if so, are there any recommendations on how to proceed and what to take into account (overlap of the chunks, seamlines between the different chunks, whether we can generate the final outputs/deliverables as one)?

The end products should be a dense cloud, an orthomosaic and a tiled mesh (like Cesium 3D Tiles and tiled OBJ).

Also, since this is a rather large dataset, I read that it's better to generate the mesh from the depth maps, so in that case we would not generate a dense cloud at all. Am I correct?
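
In other words, something like this instead of going through the dense cloud first; a sketch for Metashape 2.x, with the quality and filtering settings only as examples:

Code:
# Sketch (Metashape 2.x): build the mesh directly from depth maps,
# skipping the dense point cloud. Settings are examples only.
import Metashape

chunk = Metashape.app.document.chunk

chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)  # 4 = Medium quality
chunk.buildModel(source_data=Metashape.DepthMapsData,
                 surface_type=Metashape.Arbitrary,
                 face_count=Metashape.HighFaceCount)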

Thanks and looking forward to your insights!

Regards,

Remco
