
Show Posts



Topics - Thibaud Capra

Pages: [1]
1
General / Ground filtering algorithm: where does it start?
« on: May 23, 2017, 03:42:59 PM »
Hello,
I've been using the ground filtering algorithm with great success so far, but I was wondering how accurate it actually is. I ran a few tests on sample data, building confusion matrices and comparing the results with state-of-the-art algorithms to assess its performance. So far so good.
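For reference, the comparison boils down to counting four cases per point; a minimal pure-Python sketch (0 = ground, 1 = off-ground, with Type I / Type II in the Sithole & Vosselman sense):

```python
def confusion(reference, predicted):
    """2x2 confusion matrix for ground (0) vs off-ground (1) labels."""
    counts = {"TP": 0, "FN": 0, "FP": 0, "TN": 0}
    for ref, pred in zip(reference, predicted):
        if ref == 0 and pred == 0:
            counts["TP"] += 1   # ground correctly kept
        elif ref == 0 and pred == 1:
            counts["FN"] += 1   # ground rejected (Type I error)
        elif ref == 1 and pred == 0:
            counts["FP"] += 1   # off-ground accepted (Type II error)
        else:
            counts["TN"] += 1
    return counts

ref  = [0, 0, 1, 1, 0]
pred = [0, 1, 1, 0, 0]
print(confusion(ref, pred))  # {'TP': 2, 'FN': 1, 'FP': 1, 'TN': 1}
```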

As a student working on his master's thesis, I'd love to know a bit more about how it works. Since this is a commercial product, I understand I won't get many details, but it's worth a try!

In the User Manual, it is written:
Quote
Max angle (deg)
Determines one of the conditions to be checked while testing a point as a ground one, i.e. sets limitation for an angle between terrain model and the line to connect the point in question with a point from a ground class. For nearly flat terrain it is recommended to use default value of 15 deg for the parameter. It is reasonable to set a higher value, if the terrain contains steep slopes.
Max distance (m)
Determines one of the conditions to be checked while testing a point as a ground one, i.e. sets limitation for a distance between the point in question and terrain model. In fact, this parameter determines the assumption for the maximum variation of the ground elevation at a time.
Cell size (m)
Determines the size of the cells for point cloud to be divided into as a preparatory step in ground points classification procedure. Cell size should be indicated with respect to the size of the largest area within the scene that does not contain any ground points, e. g. building or close forest.
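For what it's worth, here is how I currently read those two criteria (purely my interpretation, not Agisoft's actual implementation), as a toy sketch where the terrain model is simplified to a horizontal plane at height z0 and the reference ground point is an anchor lying on that plane; the 15° default comes from the manual, the distance default is just a placeholder:

```python
import math

def accept_as_ground(point, z0, max_angle_deg=15.0, max_distance_m=1.0,
                     anchor=(0.0, 0.0)):
    """Toy version of the two tests I understand from the manual:
    distance from the candidate to the terrain model, and angle of the
    line joining it to an already-classified ground point."""
    x, y, z = point
    dist = abs(z - z0)  # distance to the (horizontal) terrain model
    horiz = math.hypot(x - anchor[0], y - anchor[1])
    angle = math.degrees(math.atan2(dist, horiz)) if horiz > 0 else 90.0
    return dist <= max_distance_m and angle <= max_angle_deg

# A point 0.2 m above the plane, 2 m away horizontally: ~5.7 deg, accepted
print(accept_as_ground((2.0, 0.0, 0.2), z0=0.0))  # True
```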

  • Where does the algorithm start? Most algorithms assume the lowest point is a ground point and start from there. Is that the case here?
  • I guess it deals with outliers using a combination of two criteria: Max Angle & Max Distance. But what if the lowest local point happens to be an outlier? Do you run some kind of SOR filter prior to picking the lowest point?
Any answer is appreciated!

2
General / About "Build Orthomosaic"
« on: May 04, 2017, 06:25:43 PM »
Hello everyone,

I'm using orthomosaics to texture external DEMs, but I have an issue: some small details are kept when I use the "Mosaic" blending option. If I use "Average" blending instead, these details are blended away and hidden, but I get odd color disparities.

Either way, to get a clean result, I need to either photoshop these tiny details out or correct the color saturation of my image.

Is there any way to get details on how each blending mode works?

Thanks.

3
Python and Java API / Process chunk depending on bounding box's size
« on: April 26, 2017, 03:41:27 PM »
Hello everyone

I'm currently working on UAV-based models where I process subparts of a global point cloud. The problem is that if the bounding box is too small, I may have trouble processing it (DPC, meshing, etc.) because the photo coverage is insufficient.

Since the scene's coverage is constant (one photo every X meters), I determined the minimum surface area the bounding box needs to contain enough photos for proper processing: 500 m².

I figured my condition would be something like this (I also grow the region until the bounding box is big enough):

Code: [Select]
for chunk in doc.chunks:
    region = chunk.region
    box_surf = region.size.x * region.size.y
    # Grow the region by 10% per iteration until its footprint reaches 500 m²
    while box_surf < 500:  # m²
        region.size = 1.1 * region.size
        chunk.region = region
        box_surf = region.size.x * region.size.y
    # Densify
    # Mesh etc.

How can I get the size in meters?
Any way to make it cleaner?
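For the meters question, my current understanding (to be confirmed by someone who knows the API better) is that region.size is expressed in the chunk's internal coordinate system and that chunk.transform.scale gives the internal-to-world scale factor, so the footprint in m² would be computed like this (pure Python, with the PhotoScan calls left as comments since they are my assumption):

```python
def region_surface_m2(size_x, size_y, scale):
    """Region footprint in m², assuming `scale` converts internal
    units to meters (my reading of chunk.transform.scale)."""
    return (size_x * scale) * (size_y * scale)

# In PhotoScan I would then call (untested assumption):
#   region_surface_m2(chunk.region.size.x, chunk.region.size.y,
#                     chunk.transform.scale)
print(region_surface_m2(1.52, 1.22, 20.0))
```

The loop above would then compare this value to 500 instead of the raw size product.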

Thanks in advance.

4
Hello,

I'm working with heavy point clouds (~300 million points) in which I need to locate areas of interest, then work on those small areas.
Here's my workflow:
  • Align Photos
  • Densify Cloud
  • Duplicate chunk
  • Define AOI on dense cloud
  • Mesh
  • Texture
  • Export
The sparse cloud is far too sparse to locate the AOIs, which is why I work on the dense cloud. As I handle about 40 AOIs per cloud, I have a Python script that runs steps 1/2/3, then I perform step 4 manually before running another script for steps 5/6/7.

The problem is that when my script runs successfully, there is no dense cloud in my chunks. The console says it created 307,000,000 points, yet my chunks, although created, only contain the sparse clouds (1,117,000 points)!

Is it because of hardware issues?
Current workstation:
  • CPU: Intel Xeon E5 2650 v4
  • GPU: nVidia Quadro M4000
  • RAM: 64 GB
The RAM is far from full: only 13 GB are in use... Maybe the GPU is unable to render it?
If it is a hardware issue, I'll adapt my workflow accordingly. If it isn't, where does the problem come from?

Thanks in advance.

5
Hello,

I'm trying to assess PhotoScan's classification tool for academic purposes.
I'd like to export the dense cloud to a TXT file, with each point classified as follows:

X    Y    Z    T

Where T is the point type: 0 if the point is classified as a ground point, 1 otherwise.

Any piece of code would help me!
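To make the question concrete, here is the writer side I have in mind, in pure Python. It assumes I can already iterate over points as (x, y, z, las_class) tuples, with class 2 = Ground as in the LAS-style class list posted on the forum; getting those tuples out of the dense cloud is exactly the part I'm missing:

```python
def write_classified_txt(points, path):
    """points: iterable of (x, y, z, las_class); writes 'X Y Z T' lines
    with T = 0 for ground (LAS class 2), 1 otherwise."""
    with open(path, "w") as out:
        for x, y, z, las_class in points:
            t = 0 if las_class == 2 else 1
            out.write("%.3f %.3f %.3f %d\n" % (x, y, z, t))

# Two sample points: one ground (class 2), one high vegetation (class 5)
write_classified_txt([(1.0, 2.0, 3.0, 2), (4.0, 5.0, 6.0, 5)], "cloud.txt")
```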
Thanks.

6
General / ISPRS ground filtering algorithm tests for PhotoScan
« on: March 13, 2017, 12:31:15 PM »
Hello,
I'd like to put the ground filtering algorithm included in PhotoScan to the test by running it on the ISPRS datasets (see Sithole & Vosselman, 2004).

I have the 15 datasets at my disposal and would like to run the tests and compare its results against other ground filtering algorithms, to assess its relevance in my data processing and production pipeline.

If these tests have already been run by someone, I'd love to hear from them!

Otherwise: my files are in .TXT format; any idea how I could load these into PhotoScan?

Here's the .TXT general format:

X        Y        Z        Type

Type is 0 or 1, indicating whether the given point belongs to the ground or not.
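In the meantime, one workaround I'm considering is splitting each ISPRS file into two plain XYZ files (ground / off-ground) that could be loaded separately; a sketch:

```python
def split_isprs(src, ground_out, object_out):
    """Split an 'X Y Z Type' file into two XYZ files on the Type flag
    (0 = ground, 1 = off-ground in my files)."""
    with open(src) as f, open(ground_out, "w") as g, open(object_out, "w") as o:
        for line in f:
            parts = line.split()
            if len(parts) != 4:
                continue  # skip blank or malformed lines
            x, y, z, t = parts
            (g if t == "0" else o).write("%s %s %s\n" % (x, y, z))

# Tiny demo file in the same format
with open("sample.txt", "w") as f:
    f.write("1 2 3 0\n4 5 6 1\n")
split_isprs("sample.txt", "ground.xyz", "objects.xyz")
```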

Any help is appreciated.

7
Hello,
I currently need to export shapes in a different coordinate system than my project's.
My photos are in WGS 84, and so are my dense clouds etc., and I can export those in Lambert 93 / CC 48 without any issue. The only exception is my shapefiles: is there any way to do it?

A coworker suggested using ogr2ogr from GDAL, but I don't really understand how.
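For the record, from what I gather of the GDAL documentation (please correct me if the flags are off), the reprojection would be a one-liner, with the destination file listed before the source:

```shell
# Reproject a shapefile from WGS 84 to RGF93 / CC48 (EPSG:3948)
ogr2ogr -s_srs EPSG:4326 -t_srs EPSG:3948 shapes_cc48.shp shapes_wgs84.shp
```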

Any help is appreciated.

8
Python and Java API / Mesh creation based on specific dense cloud classes
« on: February 27, 2017, 11:50:15 AM »
Hello,
I'm trying to classify my dense cloud and then mesh only the points within the "ground" class.

According to what I found on the forums, here's the list of existing classes.

Code: [Select]
Created = 0,
Unclassified = 1,
Ground = 2,
LowVegetation = 3,
MediumVegetation = 4,
HighVegetation = 5,
Building = 6,
LowPoint = 7,
ModelKeyPoint = 8,
Water = 9,
OverlapPoints = 12

My code is the following:
Code: [Select]
import PhotoScan

doc = PhotoScan.app.document
type_chunk = ['Green']

for chunk in doc.chunks:
    # Build Dense Cloud
    chunk.buildDenseCloud(quality = PhotoScan.MediumQuality,
        filter = PhotoScan.AggressiveFiltering)

    # Classify Point Cloud, unless the chunk is a green
    if not any(t in chunk.label for t in type_chunk):
        chunk.dense_cloud.classifyGroundPoints(max_angle = 15.0,
            max_distance = 0.3, cell_size = 1.0)
        print("""--------------------- DPC OK ----------------""")

    # Build Mesh, then Smooth Mesh depending on chunk type
    if not any(t in chunk.label for t in type_chunk):
        chunk.buildModel(surface = PhotoScan.HeightField,
            interpolation = PhotoScan.EnabledInterpolation,
            face_count = PhotoScan.MediumFaceCount, classes = 2)
        chunk.smoothModel(passes = 50)
    else:
        chunk.buildModel(surface = PhotoScan.HeightField,
            interpolation = PhotoScan.EnabledInterpolation,
            face_count = PhotoScan.MediumFaceCount)
        chunk.smoothModel(passes = 100)

I get the following error when running it:

Code: [Select]
2017-02-27 09:35:28   File "C:/PFE_CAPRA/Scripts/170227/AOI_Process_PartII.py", line 84, in <module>
2017-02-27 09:35:28     face_count = PhotoScan.MediumFaceCount, classes = 2)
2017-02-27 09:35:28 TypeError: classes should be a list of int

Any help is appreciated.

Best regards.

9
Python and Java API / Classifying point cloud according to chunk.label
« on: February 24, 2017, 12:31:17 PM »
Hello,
I'm trying to refine a script I've been working on to automate even more parts. This time, depending on how the chunk has been automatically labeled previously, I want to filter or not my point cloud.

Here's my code (Python API 1.2.0), currently not working:

Code: [Select]
import os
import PhotoScan

print(""">>> Initialisation du script <<<""")

doc = PhotoScan.app.document
type_chunk = ['Green']

for chunk in doc.chunks:
    # Build Dense Cloud
    chunk.buildDenseCloud(quality = PhotoScan.MediumQuality,
        filter = PhotoScan.AggressiveFiltering)

    # Classify Point Cloud, unless the chunk is a green
    if not any(t in chunk.label for t in type_chunk):
        PhotoScan.DenseCloud.classifyGroundPoints(max_angle = 15.0,
            max_distance = 0.3, cell_size = 1.0)
        print("""--------------------- DPC OK ----------------""")

    # Build Mesh, then Smooth Mesh depending on chunk type
    if not any(t in chunk.label for t in type_chunk):
        chunk.buildModel(surface = PhotoScan.HeightField,
            interpolation = PhotoScan.EnabledInterpolation,
            face_count = PhotoScan.MediumFaceCount, classes = 1)
        chunk.smoothModel(passes = 50)
    else:
        chunk.buildModel(surface = PhotoScan.HeightField,
            interpolation = PhotoScan.EnabledInterpolation,
            face_count = PhotoScan.MediumFaceCount)
        chunk.smoothModel(passes = 100)

Error message:
TypeError: descriptor 'classifyGroundPoints' of 'PhotoScan.DenseCloud' object needs an argument

Also, when building the model, I assumed the correct integer for ground points was 1. What's the correct one?

Best regards

10
Python and Java API / Get bounding box coordinates in a specific system
« on: February 21, 2017, 05:08:29 PM »
Hello everyone,
I'm trying to export the bounding box coordinates, but I can't figure out how to get them in either of the two following systems:

- RGF 93 / CC48 (EPSG::3948)
- WGS 84 (EPSG::4326)

For now I only have local coordinates...
Any help would be appreciated!

Current code:

Code: [Select]
# Determining bounding box size
import os
import PhotoScan
doc = PhotoScan.app.document
chunk = doc.addChunk()

path_export = PhotoScan.app.getExistingDirectory("""Spécifiez le dossier
 contenant les exports en fin de traitement""")
path_export += "\\"

box_list = list()
for chunk in doc.chunks:
    box_x = chunk.region.size.x
    box_y = chunk.region.size.y
    box_z = chunk.region.size.z
    box_list.append(chunk.label + " X=" + str(box_x) + " Y=" + str(box_y) + " Z=" + str(box_z))
print(box_list)

path_box = path_export + "Coordonnees_Bounding_Box.txt"
liste_boite = open(path_box, "w")
liste_boite.write(str(box_list))
liste_boite.close()

print(""">>> Coordonnées des boîtes englobantes exportées <<<""")

Result:
Code: [Select]
['Trou_18-Approche X=1.5205588006950554 Y=1.221272970580935 Z=4.5992786085513995', 'Trou_18-Green X=0.8813550129144896 Y=0.3206924763887408 Z=0.10798004251558357', 'Chunk 1 X=0.0 Y=0.0 Z=0.0', 'Chunk 2 X=0.0 Y=0.0 Z=0.0']
NB: I'm using PhotoScan Pro 1.2.6 build 2834 with Python API release 1.2.0; it works fine with my other scripts.
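For what it's worth, my current understanding of the geometry (to be confirmed): the region is an oriented box given by chunk.region.center, chunk.region.rot and chunk.region.size in internal coordinates, so its eight corners are center + R · (±size/2); each corner would then be mapped to world coordinates with something like chunk.crs.project(chunk.transform.mulp(corner)), which is my assumption about the API. The corner arithmetic alone, with plain lists:

```python
def box_corners(center, rot, size):
    """Eight corners of an oriented box: center + R . (+/- size/2).
    center/size are 3-tuples, rot is a 3x3 rotation as nested lists."""
    corners = []
    for dx in (-0.5, 0.5):
        for dy in (-0.5, 0.5):
            for dz in (-0.5, 0.5):
                local = (dx * size[0], dy * size[1], dz * size[2])
                corners.append(tuple(
                    center[i] + sum(rot[i][j] * local[j] for j in range(3))
                    for i in range(3)))
    return corners

corners = box_corners((10.0, 20.0, 30.0),
                      [[1, 0, 0], [0, 1, 0], [0, 0, 1]],  # identity rotation
                      (2.0, 4.0, 6.0))
print(len(corners))  # 8
```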

Best regards

11
Python and Java API / Export textures only
« on: February 17, 2017, 06:07:41 PM »
Hello,
I'm currently trying to export the textures from my model the "hard" way, as I did not find a way to do it smoothly: I export the whole model and then grab the texture.

Is there any smoother way to do it? Maybe directly, and I missed it?

Current code:

Code: [Select]
import PhotoScan

doc = PhotoScan.app.document
path_export = PhotoScan.app.getExistingDirectory("""Spécifiez le dossier
    contenant les exports en fin de traitement : """)
path_export += "/"

for chunk in doc.chunks:
    # Export model with its texture; the JPG is written next to the OBJ
    chunk.exportModel(path_export + chunk.label + ".obj", binary = False,
        texture_format = "jpg", texture = True, normals = False,
        colors = False, cameras = False, strip_extensions = False,
        format = "obj", projection = PhotoScan.CoordinateSystem("EPSG::3948"))

12
Hello,
I'm working on a Python pipeline to process large numbers of photos (1,500-2,000 per project), split into two parts: the first lets the user add photos and define the number of chunks needed for the second part.

Since my images are taken with our UAV, some of them are a bit blurry. I'd like to estimate image quality and remove from the photo alignment any image whose score falls below a user-defined threshold.

So far, I have the following code:

Code: [Select]
import os
import PhotoScan

print(">>> Initialisation du script <<<")

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.label = "New Chunk"

path_photos = PhotoScan.app.getExistingDirectory("Spécifiez le dossier contenant les photos : ")
path_photos += "/"

# Vérification du chemin d'enregistrement
project_path = PhotoScan.app.getSaveFileName("Spécifiez le nom du projet à enregistrer : ")
if not project_path:
    raise SystemExit("Annulation du script : pas de chemin d'enregistrement fourni")

if project_path[-4:].lower() != ".psz":
    project_path += ".psz"

# Détermination du nombre de trous/chunks
nbTrous = int(input("Saisissez le nombre de trous dans le golf : "))
print(str(nbTrous) + " chunks seront créés ; à l'issue de ce script, il faudra détourer chaque trou.")

# Ajout de photos
image_list = [path_photos + photo for photo in os.listdir(path_photos)
              if photo.lower().endswith((".jpg", ".jpeg", ".tif", ".png"))]
chunk.addPhotos(image_list)
PhotoScan.app.update()
doc.save(project_path)
print("- Photos ajoutées")

# Estimation de la qualité des photos
chunk.estimateImageQuality()

# Alignement des photos
chunk.matchPhotos(accuracy = PhotoScan.MediumAccuracy,
    preselection = PhotoScan.ReferencePreselection,
    filter_mask = False, point_limit = 20000)
chunk.alignPhotos()
doc.save(project_path)
print("-- Photos alignées")

# Création d'un chunk par trou
for chunkIter in range(1, nbTrous + 1):
    trou = chunk.copy()
    trou.label = "Trou_" + str(chunkIter)
print("--- " + str(nbTrous) + " chunks créés.")

# Sauvegarde finale
doc.save(project_path)
print("Projet enregistré, vous pouvez maintenant adapter les Bounding Box de chaque chunk.")

print(">>> Script terminé <<<")

Don't mind the French comments and strings; they are there to be user-friendly, as the script is meant for French-speaking users.
The interesting part here is this one:
Code: [Select]
# Estimation de la qualité des photos
chunk.estimateImageQuality()

So estimateImageQuality returns a floating-point number, but I honestly don't know how to exploit it!
I'd like any photo with a quality score under 0.5 to be removed from the photo alignment process I'm running right after.
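What I have in mind, assuming (not verified) that after chunk.estimateImageQuality() each camera exposes its score through float(camera.photo.meta["Image/Quality"]) and can be excluded with camera.enabled = False, is the selection step below, kept as a plain function:

```python
def cameras_to_disable(quality_by_label, threshold=0.5):
    """Return the labels whose quality score falls below the threshold."""
    return sorted(lab for lab, q in quality_by_label.items() if q < threshold)

# In PhotoScan I would build the dict with (my assumption of the API):
#   {cam.label: float(cam.photo.meta["Image/Quality"]) for cam in chunk.cameras}
# then set cam.enabled = False for every returned label before matchPhotos().
print(cameras_to_disable({"IMG_001": 0.81, "IMG_002": 0.43, "IMG_003": 0.49}))
# ['IMG_002', 'IMG_003']
```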
Any help would be appreciated!

Best regards

13
Feature Requests / Use shapes / KML files to fit bounding box size
« on: February 14, 2017, 03:51:29 PM »
Hi,
As an upcoming feature for PhotoScan, the possibility of fitting the bounding box to a shape (in X and Y, of course) would be very interesting to speed up processing.

For instance, when processing UAV-acquired photos, even with a planned flight, some areas are of no interest. Using KMLs would be a very fast and convenient way to process only the parts that actually interest us.

Best regards

14
General / Using KML files as bounding boxes
« on: February 09, 2017, 11:53:00 AM »
Hello everyone,
I am currently working on a project for my internship where I've been given multiple tasks to do.
My main goal is to produce various files (mostly OBJ models and orthomosaics) from UAV images in the most automated way possible. I'm currently using Python for this, but let's set that aside for now, as my concern here is another issue:

How can I use KML files to create areas of interest for PhotoScan to densify in?

Let me explain a bit more: I work with a large number of photos acquired with a UAV (~1,400 photos) and I have to cover large areas (golf courses). One of my goals is to produce two outputs with two different levels of detail (LoD): the first should be light, a global model of the whole golf course; the second should be very detailed, clean and accurately filtered, for the greens. The idea behind these two LoDs is to optimize computing time, as I don't need high accuracy over the whole course, only on every green.

To avoid building a very dense point cloud (DPC) of the whole course, I'd like to create two kinds of clouds: one for the whole course, with a medium/low-quality DPC, and one per green, with a high/ultra-high-quality DPC that I then filter.

I have at my disposal KML files delimiting every single green area in my georeferenced cloud.
I would like to use them to isolate my greens in the sparse cloud and create new files containing the greens only to build DPC with a different LoD.

Remember that the main goal here is to maximize the automation of the process.

TL;DR: How can I use my KML files to export parts of my sparse point cloud into separate projects, so as to create two kinds of clouds with two different LoDs?

Thanks a lot, any help is greatly appreciated!

EDIT: I just talked with some coworkers; would it be easier to fit the bounding box to each KML's own bounding box and export them one by one?
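Following up on that edit: extracting each KML polygon's bounding rectangle looks doable with the standard library alone; a sketch on a hypothetical minimal KML (real files may nest elements differently):

```python
import xml.etree.ElementTree as ET

KML_NS = "{http://www.opengis.net/kml/2.2}"

def kml_bbox(kml_text):
    """(min_lon, min_lat, max_lon, max_lat) over every <coordinates>
    element; KML stores 'lon,lat[,alt]' tuples separated by whitespace."""
    root = ET.fromstring(kml_text)
    lons, lats = [], []
    for node in root.iter(KML_NS + "coordinates"):
        for triple in node.text.split():
            lon, lat = triple.split(",")[:2]
            lons.append(float(lon))
            lats.append(float(lat))
    return min(lons), min(lats), max(lons), max(lats)

sample = """<kml xmlns="http://www.opengis.net/kml/2.2"><Placemark>
<Polygon><outerBoundaryIs><LinearRing><coordinates>
-1.60,47.20,0 -1.58,47.20,0 -1.58,47.22,0 -1.60,47.22,0
</coordinates></LinearRing></outerBoundaryIs></Polygon>
</Placemark></kml>"""
print(kml_bbox(sample))  # (-1.6, 47.2, -1.58, 47.22)
```

The remaining step, which I still don't know how to do, would be mapping that geographic rectangle onto chunk.region.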
