Community Forum

Show Posts


Messages - Alexey Pasumansky

Python Scripting / Re: BuildModel params network
« on: Today at 08:17:19 PM »
Hello magic,

This code works for me in version 1.4:

Code:
import PhotoScan

client = PhotoScan.NetworkClient()
doc = PhotoScan.app.document

task = PhotoScan.NetworkTask()
for c in doc.chunks:
#    task.chunks.append(c.key )
    task.frames.append((c.key, 0))
task.name = "BuildModel"
#task.params['apply'] = task.chunks
task.params['cameras'] = task.frames
#task.params['faces'] = "high"
#task.params['classes'] = [PhotoScan.PointClass.Ground]
task.params['surface'] = PhotoScan.Arbitrary
task.params['interpolation'] = PhotoScan.EnabledInterpolation
task.params['face_count'] = 1000
task.params['source'] = PhotoScan.DataSource.DenseCloudData
task.params['network_distribute'] = True #fine level task distribution key

path = "F:/TEST_PHOTOSCAN/toto2.psx" #relative path from root folder
client.connect('') #server ip
batch_id = client.createBatch(path, [task])


I've commented out a couple of unnecessary lines.

Can you check if it works on your side as well?
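For reference, the frames list and params dictionary above can be assembled and inspected with plain Python before handing them to the NetworkTask; this sketch uses stub objects in place of PhotoScan chunks (the `StubChunk` class and its `key` values are illustrative, not part of the PhotoScan API):

```python
from collections import namedtuple

# Stub standing in for PhotoScan.Chunk; only the .key attribute is used here.
StubChunk = namedtuple("StubChunk", ["key"])

def build_frames(chunks, frame=0):
    """Collect (chunk_key, frame_index) pairs, as NetworkTask.frames expects."""
    return [(c.key, frame) for c in chunks]

chunks = [StubChunk(key=0), StubChunk(key=1), StubChunk(key=5)]
frames = build_frames(chunks)

params = {
    "cameras": frames,           # apply the task to these chunk/frame pairs
    "face_count": 1000,          # target polygon count
    "network_distribute": True,  # fine-level task distribution
}

print(frames)  # [(0, 0), (1, 0), (5, 0)]
```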

Python Scripting / Re: Limiting Poly count
« on: Today at 08:10:27 PM »
Hello DaveRig,

When the mesh is generated in Arbitrary mode from the dense cloud, it is first reconstructed at the highest possible resolution (with "unlimited", i.e. the maximal possible number of faces) and then decimated to the user-defined value if that value is lower than the current face count.

So if the custom face count is higher than the default (maximum) face count, the decimation step is simply skipped.
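In other words, the final face count is just the minimum of the reconstructed count and the requested count. A minimal sketch of that decision (the function name is illustrative, not a PhotoScan API):

```python
def final_face_count(reconstructed, requested):
    """Mesh is decimated only when the requested face count is below
    the reconstructed (maximum) count; otherwise decimation is skipped."""
    if requested < reconstructed:
        return requested      # decimate down to the requested value
    return reconstructed      # decimation skipped

print(final_face_count(5_000_000, 200_000))  # 200000 (decimated)
print(final_face_count(150_000, 200_000))    # 150000 (decimation skipped)
```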

General / Re: two camera process
« on: Today at 08:06:41 PM »
Hello mapping_1,

Can you please specify how you are checking the alignment results and the slave offset estimation? Do you enable Fit Location in the Slave Offset tab?

General / Re: Help with network processing performance
« on: Today at 03:25:21 PM »
Hello Srolin,

The gpu_mask value is the integer representation of a binary mask.

For example, if you have 2 GPUs and wish to have both of them enabled, then the binary mask will be 11 (each bit applies to a separate GPU), the integer representation of "11" is 3.
So for enabling all N out of N graphic cards you can use the following formula to calculate gpu_mask value:
Code:
gpu_mask = (2 ** N) - 1  # note: Python exponentiation is **, not ^ (which is bitwise XOR)
For three GPUs it should be 7, so your value is correct for the GTX 1080 system. You can also check the log on the node to see if all cards are initialized at the beginning of a GPU-supported stage.
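A small pure-Python sketch of this arithmetic (the helper names are illustrative): enabling all N GPUs gives the mask 2**N - 1, and an arbitrary subset of cards can be built by OR-ing the per-GPU bits:

```python
def mask_all(n_gpus):
    """Mask with the lowest n_gpus bits set: enables every GPU."""
    return (1 << n_gpus) - 1  # same value as 2**n_gpus - 1

def mask_subset(gpu_indices):
    """Mask enabling only the listed GPU indices (0-based)."""
    mask = 0
    for i in gpu_indices:
        mask |= 1 << i
    return mask

print(mask_all(2))          # 3 -> binary 11, both GPUs
print(mask_all(3))          # 7 -> binary 111, three GPUs
print(mask_subset([0, 2]))  # 5 -> binary 101, first and third GPU only
```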

General / Re: Batch process > Run script
« on: Today at 03:21:18 PM »
Hello bigben,

The Run Script operation is actually applied to the whole document. So if the script should affect several chunks, you need to modify the script accordingly.

Hello nopaixx,

I think you can use something like the following to get the number of aligned cameras in the chunk:
Code:
def numaligned(chunk):
    num_aligned = 0
    for camera in chunk.cameras:
        if camera.transform:  # aligned cameras have a non-None transform
            num_aligned += 1
    return num_aligned
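Since only `camera.transform` is inspected, the same logic can be exercised without PhotoScan using stub objects (the `StubCamera`/`StubChunk` classes below are illustrative stand-ins, not part of the PhotoScan API):

```python
class StubCamera:
    """Minimal stand-in for PhotoScan.Camera: an aligned camera carries
    a (truthy) transform, an unaligned one has transform None."""
    def __init__(self, transform):
        self.transform = transform

class StubChunk:
    """Minimal stand-in for PhotoScan.Chunk with a .cameras list."""
    def __init__(self, cameras):
        self.cameras = cameras

def numaligned(chunk):
    # Count cameras whose transform is set, i.e. that were aligned.
    return sum(1 for camera in chunk.cameras if camera.transform)

chunk = StubChunk([StubCamera("matrix"), StubCamera(None), StubCamera("matrix")])
print(numaligned(chunk))  # 2
```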

General / Re: Split in chunks & Strict volume masking
« on: July 19, 2018, 10:29:55 PM »
Hello FredR,

Can you post screenshots of the smaller chunks separately, showing how the area next to the "seamline" looks for each of them?

I tried rebuilding the dense cloud based on the current alignment and also got noise around the object. It looks like a consequence of the background lacking sufficient texture features: the background the subject is placed on is poorly textured and also produces glare, which causes problems both for finding valid matches and for building depth maps.
Note that building depth maps between pairs of frames requires a certain number of valid matches between them. I tried rebuilding the sparse cloud and extending the reconstruction region; that gave a somewhat better result, but it is of course still noisy due to the lack of texture features.

Hello Ian,

Maybe there was some incorrect assignment in your previous attempts (I suspect something on the client's side) that affected the class/method definitions themselves.

And another simple example that may be helpful for those who want also to estimate the time left for the current operation based on the spent time and current progress value:

Code:
import time, PhotoScan

def progress_print(p):
    elapsed = float(time.time() - start_time)
    if p:
        sec = elapsed / p * (100 - p)  # estimated time left from elapsed time and progress
        print('Current task progress: {:.2f}%, estimated time left: {:.0f} seconds'.format(p, sec))
    else:
        print('Current task progress: {:.2f}%, estimated time left: unknown'.format(p))  # 0% progress

chunk = PhotoScan.app.document.chunk
start_time = time.time()
chunk.matchPhotos(progress=progress_print)
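The remaining-time arithmetic can be checked in isolation. A pure-Python helper, under the assumption that progress is reported as a percentage (the function name is illustrative):

```python
def eta_seconds(elapsed, progress_pct):
    """Estimate seconds remaining from elapsed time and progress in percent.

    If progress_pct percent took `elapsed` seconds, the full run takes
    elapsed * 100 / progress_pct, so the remainder is the difference.
    """
    if progress_pct <= 0:
        return None  # no estimate possible yet
    return elapsed * (100 - progress_pct) / progress_pct

print(eta_seconds(30, 25))  # 90.0 -> 25% took 30 s, so 90 s remain
print(eta_seconds(10, 50))  # 10.0 -> halfway, as long again
print(eta_seconds(5, 0))    # None -> unknown at 0% progress
```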

Hello Vladimir,

A file from the project structure is missing on GoogleDrive; could you please upload it or send a separate link to it?

Hello Ian,

I cannot reproduce this issue in PhotoScan Pro 1.4.3 (should be the same in 1.4.2).

Does it happen if you open a new PhotoScan Pro instance and just input these two lines to the Console?
Code:
proj = PhotoScan.OrthoProjection()
proj.crs = PhotoScan.CoordinateSystem("EPSG::4326")

Hello akoumis,

I think that (T.mulv( - pt.coord[:3])).norm() should give you the proper result.
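The `.norm()` call returns the Euclidean length of the resulting vector, so the expression measures a straight-line distance. In plain Python (helper names illustrative; this assumes `Vector.norm()` is the Euclidean norm):

```python
import math

def norm(vec):
    """Euclidean length of a vector, as Vector.norm() computes it."""
    return math.sqrt(sum(x * x for x in vec))

def distance(a, b):
    """Straight-line distance between two 3D points."""
    return norm([ai - bi for ai, bi in zip(a, b)])

print(distance([0, 0, 0], [3, 4, 0]))  # 5.0 (classic 3-4-5 triangle)
```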

Hello Ian,

Have you tried this one?
Code:
proj = PhotoScan.OrthoProjection()
proj.crs = PhotoScan.CoordinateSystem("EPSG::4326")

Hello Ian,

It seems that you are doing something wrong.

The parameters (params) of the NetworkTask should be defined either via the .encode() method of PhotoScan.Task() or using dictionary assignment:
Code:
proj = PhotoScan.OrthoProjection()
network_task.params["projection"] = proj

Also note that default parameters are not passed when the task is sent to the server.
