Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - jrp

Pages: [1] 2
1
Network processing Error: Can't remove folder: parent folder mismatch

I’ve just updated everything to 2.1.0 and was excited to get network processing back up and running (having disbanded the setup during COVID). The setup is as follows:

Clients run Windows.
Clients have a basic .bat script on the desktop so that users can run underused clients as processing nodes.
We are running a pair of new Linux servers as processing nodes, though we previously also ran an old Linux server as a processing node successfully.
The “server” is the same physical machine as the file server, which is a Linux machine with Samba and ZFS hosting the data.
(There is a quirk in the current setup: as part of the new server deployment process, Windows machines are temporarily connecting to an old Linux server running Samba, with NFS mapping in the shared folders from the actual new server.)

Prior to the new server installs, I had this all working fine, and I have a fairly good understanding of how it’s supposed to work.

I connected everything, ran a Windows client using its bat script, created a new project using the same machine, then tested network processing.

Image alignment worked great. Then I ran model from depth maps; it almost finished, then it repeatedly gave:
failed #2 BuildModel.cleanup (1/1): Can't remove folder: parent folder mismatch

This was running on one of the Linux nodes. When I used node priorities to shift the job to the Windows node, it completed fine soon after.

It’s clear that the paths are set (at least mostly) right, as the Linux nodes did most of the processing just fine.

Any ideas on what’s going on here?

Console output on the node is (repeated every few seconds forever):

2024-01-05 15:18:34 processing failed in 0.211594 sec
2024-01-05 15:18:35 BuildModel.cleanup (1/1): subtask = cleanup, surface_type = Arbitrary, interpolation = Enabled, face_count = High, source_data = Depth maps, vertex_colors = on, volumetric_masks = off, blocks_crs = Local Coordinates (m), working_folder = /home/saceimg/photogrammetry/test2/test/test1a1.files/0/0/model.tmp
2024-01-05 15:18:35 Peak memory used: 4.80 GB at 2024-01-05 15:15:48
2024-01-05 15:18:35 deleting all temporary files...
2024-01-05 15:18:35 Error: Can't remove folder: parent folder mismatch

I suspected a permissions issue, as there are different users in use, so I waited until it failed, logged into a Linux node as the correct user, and deleted all of the data in the folder mentioned; no permission problems showed. Interestingly, with the model.tmp folder deleted it continued with the same error message, but it was fine once the job was passed back to Windows.
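Since the Linux nodes reach the data through the Samba/NFS re-mapping quirk described above, one speculative check (my guess, not a confirmed cause) is whether the tmp folder’s parent resolves to a different path than the one stored with the job, e.g. through a symlink or mount. A quick sketch:

```python
import os

def parent_paths(folder):
    """Return (literal_parent, resolved_parent) for a folder.

    If any component of `folder` is a symlink, or an NFS/bind mount
    that resolves to a different path, the two values differ -- one
    hypothesis for what "parent folder mismatch" is complaining about.
    """
    literal = os.path.dirname(os.path.abspath(folder))
    resolved = os.path.dirname(os.path.realpath(folder))
    return literal, resolved

# Path taken from the failing node's log; adjust to your own setup.
literal, resolved = parent_paths(
    "/home/saceimg/photogrammetry/test2/test/test1a1.files/0/0/model.tmp")
if literal != resolved:
    print("mismatch:", literal, "<->", resolved)
```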

Thanks,
JRP

2
General / Server and processing node as systemd services in linux?
« on: March 07, 2022, 05:34:51 PM »
Hi,

I found this article on how to run software as services in Linux; this looks like exactly the sort of thing that I'm keen to do with both my server and processing node machines. I could go through the instructions listed on that site myself, but I don't know exactly how the software behaves, or what the security implications are in detail.

https://www.shellhacks.com/systemd-service-file-example/

I haven't seen any advice for running Metashape in any way other than manually at the shell prompt, which works just fine but requires things like manually restarting everything after a power outage or other system restart.

Has anyone else experimented with this (or other methods), and does anyone have any advice on how to implement this?
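For what it's worth, here is the sort of unit file I had in mind, based on the template from that article. The install path, the dedicated non-root user account, and the IPs are all assumptions to adapt; the --server/--control/--dispatch/--platform flags are the ones I already use at the prompt.

```ini
[Unit]
Description=Metashape network processing server
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
# Dedicated non-root account -- one of the security points I'm unsure about.
User=metashape
WorkingDirectory=/opt/metashape-pro
ExecStart=/opt/metashape-pro/metashape.sh --server --control 192.0.2.10 --dispatch 192.0.2.10 --platform offscreen
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Saved as /etc/systemd/system/metashape-server.service, it would presumably be enabled with `sudo systemctl enable --now metashape-server`, and a processing node would be the same file with --node in ExecStart. Whether Metashape behaves well under systemd is exactly what I'm asking about.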

Thanks,
JRP


3
General / Log4Shell issues?
« on: December 15, 2021, 06:21:07 PM »
Hi,

Could I ask that someone from Agisoft give an official statement confirming whether or not any Agisoft products are affected by the Log4Shell vulnerability?

I need to gather this information for an institutional security review, and I felt it was worth asking here in case it helps anyone else in the same position.

I'm not expecting any issues, but I only use the local network infrastructure, not the cloud infrastructure, so I'm not sure what the latter runs on.

Also, apologies if this is already up here somewhere; I can't find it if it is.

Thanks.

4
General / Linux processing and graphics cards/drivers
« on: October 20, 2021, 04:31:18 PM »
Hi,

I’m planning to build a Linux server for network processing, likely headless, and I’ve realised I’m a little out of date with graphics card driver matters. Questions:

Any recommendations on graphics cards? The usual go-to seems to be an Nvidia RTX 3070 (or a pair of them), but I’m not sure how well they play with Linux these days.

Open source or proprietary drivers?

I know that the driver and config situation for graphics cards is not totally obvious for Metashape (for example, using two cards without SLI is a thing, etc.).

Thanks,
JRP


5
General / Metashape crashing intermittently during saves
« on: March 26, 2021, 04:23:46 PM »
Metashape is crashing intermittently during saves. When it crashes, the whole application disappears, losing all work since the last save; if multiple projects are open, they all close. It’s happening once or twice a week at the moment, causing a lot of lost processing time.

If Windows is restarted it works again, but if Windows is not restarted, attempting to reopen Metashape gives the message:

“Windows cannot access the specified device, path, or file. You may not have the appropriate permissions to access the item.”

The Windows user account is not running in admin mode (this is a regulatory requirement for our organisation; we can briefly access admin mode for testing or genuine admin tasks, but it’s time-limited). But we’ve never had the issue before, and other machines have worked just fine in this configuration for a long time.

It usually, but not exclusively, happens when saving during batch processing runs. We do a lot of batch processing, though, so it may simply be that “during saving” is when crashes tend to land. We estimate around 10% of batch runs end in a crash.

Our first assumption was a hard disk fault; we switched to saving on the OS SSD for a while and it kept happening.

We have updated to Metashape 1.7.2 (build 12070) by doing an uninstall and reinstall, but the issue is still happening.

The issue started at roughly the same time as Python (3.8) was installed onto the machine. I suspect this may be involved, but I’m not sure. I’m also not sure what to do about it if it is.

Thank you in advance for any assistance or suggestions.

6
General / Thermals on 10900k
« on: March 19, 2021, 10:11:08 PM »
Hi,

I have a brand new custom-built machine with an i9-10900K and an RTX 2070.

I’ve been checking the thermals on it, and I’m a little worried something is wrong. Either this generation just runs hot with Metashape or I have a problem with my cooling solution.

Within a few seconds of entering the estimating-camera-locations part of processing (and some other bits), the CPU temperature pegs at 100 °C, stays between 90 and 100 °C, and it starts thermal throttling, averaging around 4.5 GHz.

The performance is just fine, but I’m not sure if I’m damaging the thing.

I generally start to get nervous when I see temperature stats going over 65 °C; this one is making me very nervous indeed.

Thanks,
JRP

7
General / Windows reinstall, licencing
« on: March 18, 2021, 04:00:27 PM »
I have a Windows machine with an Agisoft Metashape node-locked licence on it. I need to reinstall Windows on the machine. Should I deactivate the licence, then reactivate later, or should I leave it activated?

I’m worried that if I deactivate it, the licence will not reinstall back onto the same machine. I’m also worried that if I don’t deactivate it, it will disappear if the disk has to be wiped.

Apologies for starting a new topic for something that is likely already documented somewhere, but I’ve found contradictory reports in the past about how this works.

8
General / Hardware optimisation and memory channels
« on: October 21, 2020, 02:55:20 PM »
I’m specifying a workstation around Metashape’s capabilities, and have a question.

Puget Systems recently tested CPUs and found that the Intel 10900K outperforms the 10900X by a significant margin. (https://www.pugetsystems.com/labs/articles/Agisoft-Metashape-Performance-Intel-Core-10th-Gen-vs-AMD-Ryzen-3rd-Gen-1765/)

This is great, as the associated hardware is cheaper: the 10900K uses more mainstream motherboards/chipsets and a different socket, and is generally more accessible.

The 10900K has two memory channels; the 10900X has four. I had always assumed that memory channels and memory bandwidth would be such a fundamental requirement for performance in Metashape that I had never previously considered any of the consumer hardware. I did a full double take when I saw this data.

My workflow is small-object museum work, where we sometimes have to shoot 600+ 24 MP images per object for a variety of reasons. A machine with 64 GB of RAM once ran out of memory on the largest model we’ve done.

My question is: are Puget’s findings general, or are they specific to the type of testing they perform? Their large project (the park map) is 792 images at 18 MP (so actually pretty close), but it’s such a different dataset from ours that I’m not sure it applies directly.

I’m torn between buying a 10900X with four memory channels, as my intuition says, and buying a 10900K with two memory channels, as the data suggests.

Have I missed something here?
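For my own sanity I worked through the theoretical peak numbers; this is just channels × transfer rate × bus width, assuming both parts run their officially supported DDR4-2933 (real-world Metashape performance may not track peak bandwidth at all, which is presumably what Puget’s data is showing):

```python
def peak_bandwidth_gb_s(channels, mt_per_s, bus_bytes=8):
    """Theoretical peak memory bandwidth in GB/s:
    channels x mega-transfers/s x bytes per 64-bit transfer."""
    return channels * mt_per_s * 1e6 * bus_bytes / 1e9

i9_10900k = peak_bandwidth_gb_s(2, 2933)  # dual channel
i9_10900x = peak_bandwidth_gb_s(4, 2933)  # quad channel
print(f"10900K: {i9_10900k:.1f} GB/s, 10900X: {i9_10900x:.1f} GB/s")
# -> 10900K: 46.9 GB/s, 10900X: 93.9 GB/s
```

So the 10900X has double the theoretical headroom, yet still loses in the benchmark, which is what has me confused.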

Thanks,
JRP

9
General / Is faster networking hardware helpful?
« on: December 16, 2019, 08:23:25 PM »
For network processing, with a reasonably fast file server, and several fast nodes, is upgrading everything to 10 gigabit ethernet worthwhile?

I'm not sure if this is a bottleneck or not.

Is there any way to quantify how much speedup would likely be gained?
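To partially answer my own question, the best bound I can come up with: estimate how long a project’s data takes to cross the wire and compare that with how long each processing step runs. The 80% line-rate efficiency and the 50 GB payload below are assumptions, not measurements (real throughput would need measuring, e.g. with iperf3):

```python
def transfer_seconds(gigabytes, link_gbps, efficiency=0.8):
    """Rough time to move `gigabytes` of data over a link of
    `link_gbps` gigabits/s, assuming a fraction `efficiency` of
    line rate survives protocol overhead."""
    return gigabytes * 8 / (link_gbps * efficiency)

# Hypothetical 50 GB of images + intermediate data shuttled to a node:
for gbps in (1, 10):
    print(f"{gbps:>2} GbE: ~{transfer_seconds(50, gbps):.0f} s")
```

If transfers come out as minutes while the processing steps run for hours, 10 GbE presumably wouldn’t buy much.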

Thanks,
JRP

10
General / Failing to understand coordinate spaces
« on: December 03, 2019, 04:34:25 PM »
Has someone done a write-up on how all of the coordinate spaces and transformation matrices work in Metashape?

There is a specific problem below, but I really want the general answer so that I can solve these problems myself, as I keep running into them.

I have a model of a room that came out about 45 degrees off level. I clearly have no way to get world coordinates mapped onto it, but on the assumption that the floor is level, I rotated the bounding box so that the red side is down. I then output an orthomosaic on "Top XY"; the orthomosaic was still off level by 45 degrees. Logic told me that, as the red face of the bounding box is used for the bottom of a height field, it would define the "bottom" for other things too.

Next I attempted to fix this by adding some fake GCPs. I added three points on the floor, measured the distances between them, did a little trig in Excel to produce a plausible set of local metre coordinates at Z = 0 for the three points, entered the numbers, then clicked the "update" button in the Reference pane. Nothing improved.
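For reference, the trig I did in Excel amounts to this: put point 1 at the origin, point 2 along +X, and solve point 3 from the two remaining distances. The 3-4-5 triangle below is a made-up sanity check, not my real measurements. (As far as I understand it, the markers also have to be pinned on the photos themselves, not just given coordinates, before "update" can do anything, which may be part of my problem.)

```python
import math

def local_gcp_coords(d12, d13, d23):
    """Lay out three points with the given pairwise distances on a
    local metre grid at Z = 0: p1 at the origin, p2 along +X, p3
    solved in the XY plane via the law of cosines."""
    x3 = (d12**2 + d13**2 - d23**2) / (2 * d12)
    y3 = math.sqrt(max(d13**2 - x3**2, 0.0))
    return (0.0, 0.0, 0.0), (d12, 0.0, 0.0), (x3, y3, 0.0)

# Made-up 3-4-5 right triangle as a sanity check:
p1, p2, p3 = local_gcp_coords(3.0, 4.0, 5.0)
print(p1, p2, p3)  # p3 should land at (0.0, 4.0, 0.0)
```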

Please advise.

11
General / Cannot find platform offscreen
« on: May 14, 2019, 09:53:35 PM »
Hi all,

Further to a recent discussion about headless cluster issues I had, I've got a new problem: I get the following when running on a Linux Ubuntu 18.04.2 machine. The machine is modestly customised for our institution's environment, mostly to do with login processes and file sharing.

Code:
jrp@server:~/agisoft2/metashape-pro$ ./metashape.sh --node -platform offscreen
This application failed to start because it could not find or load the Qt platform plugin "offscreen"
in "".

Available platform plugins are: offscreen, xcb.

Reinstalling the application may fix this problem.
Aborted (core dumped)


I do not yet have a licence installed on the machine; I'm waiting until I see something that looks promising (like it getting far enough to tell me it doesn't have a licence) before doing that, as licences are expensive and I don't want to lose one.

If I replace "node" with "server" it runs in server mode just fine. Adding references to IP addresses etc. seems to do nothing useful. Removing "node" completely gives the same result.

I have tried both versions 1.5.0 and 1.5.2, which counts as more than a reinstall.

Any suggestions?

12
General / "can't load OpenCL library"
« on: April 09, 2019, 02:58:36 PM »
I regularly get the message "can't load OpenCL library" in the log, in red letters. I have been getting this message across many versions and more than one computer. I spent quite a while troubleshooting before discovering that (as far as I can work out) OpenCL isn't relevant to the NVIDIA graphics cards I use, and the message appears to be meaningless in my situation.

Is this a configuration problem at my end? Is it an actual problem that I need to fix? Or should the message read "you don't have any OpenCL devices"?

Everything seems to perform just fine otherwise.

Please advise.

13
General / Linux headless server
« on: January 30, 2019, 07:57:45 PM »
Hi.

I’m trying to set up a server on a headless Linux box. According to the documentation I’ve read, the command should be:

./metashape.sh --server --control 1.2.3.4 --dispatch 1.2.3.4 --platform offscreen

(IP redacted)
It comes up with the following error message though:

“error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory”

I’m sure I’ve read that running a headless linux server is entirely fine, but I’m not getting anywhere with this.

The machine is running Ubuntu 14.04.5 LTS. I’m aware this is fairly old, but I’m not expecting that to be the issue.

Any advice?

I’m also interested in running a headless processing node (on a different server), but I’ve not even started on that yet.

14
General / Graphics card options
« on: November 29, 2018, 02:09:00 PM »
In a high-end graphics workstation being built specifically for PhotoScan (64 GB RAM, 9920X), I'm trying to work out which graphics card to include. My workload is likely to be objects (rather than landscapes), and there will sometimes be image sets well into the high hundreds of images; I intend to push the capabilities of the machine as far as I can.

Options (within budget) are:

GTX 1080 Ti -- has 11 GB of memory; this is the current default.
RTX 2080 -- similar-ish performance to the above, with a lower CUDA core count and less memory.
2x GTX 1070 -- still within budget (just), but worse upgrade options, as we only really have space for two cards.

My question is -- for raw processing speed on large datasets, which of these options will give the best performance?


Secondary question: is the 11 GB of memory in the 1080 Ti a significant benefit to the process? I have a theory, which I'm unable to test, that larger datasets (which none of the benchmarks cover particularly well) will benefit from it, whereas smaller datasets won't.

We know from Puget Systems' testing that a second graphics card gives a 20-25% boost to performance under their test conditions; how is this likely to manifest in my situation?

15
General / Photoscan per user settings
« on: November 07, 2018, 10:44:40 PM »
Hi,

I'm on a university campus where each user has their own login details to the Windows domain. I have discovered to my dismay that most (all?) of the settings in PhotoScan are per-user settings.

Few of these settings are important, but one of them is the tickbox that tells it to use the GPU for processing; this means that for months every user except me has been running with no GPU assistance.

I may be rolling out network processing soon, and I expect there will be many settings that are important for that.

Is there a way to fix this for all users?

Thanks,
JRP
