Show Posts


Topics - hanparker

Pages: [1]
1
General / Geometry going wrong in 'samey' areas
« on: September 29, 2015, 01:38:35 PM »
I'm not sure if that's what's happening, but here's the situation:

I'm modelling big long stretches of road, using a Python script to set the settings, batch up the photos, set the reference and camera calibration, etc. The models seem pretty good through villages. As I move into long boring stretches with hedges on both sides and nothing much to differentiate (I'm guessing this might be the problem), the model's geometry stops matching the real world.

I was having problems with rotation of my models, but then had it pointed out that yaw/pitch/roll isn't used, so I started providing photos from both sides of the road. If I check the model window before I start alignment, I can confirm I've got all the photos for both sides of the road, and nothing odd or different is going on compared with the models that look pretty good in the built-up areas. I'm only modelling relatively small chunks at a time, with around 600 photos in each, using 10,000 tie points, high accuracy and "reference" pair selection.

So my hypothesis is that it's struggling because of a lack of interesting features. Does this sound likely, and is there anything I can do to help it out?


2
General / Some rotation in my model..?
« on: September 24, 2015, 06:04:56 PM »
I've spent quite a lot of time getting to grips with Agisoft and have been making some good models. I'm using data from a road-based survey. I am supplying the model with x, y, z and yaw, pitch and roll. I've also supplied camera calibration and told it to fix the calibration. I don't use the "optimise alignment" process, as I don't want it to change any alignment, and I set tie points to 10,000.

I'm reasonably happy that my input values are correct: when I use them to make a model with alternative software (e.g. a trial license of Acute3d), the resultant model is perfectly aligned with the LiDAR data.

However, the Agisoft model seems to be suffering some rotation: on one side of the road the mesh is about 20cm higher than the LiDAR data, and on the other side of the road, it's 20cm lower.

I tried using different yaw/pitch/roll values as a test and the resultant model was exactly the same. Does it actually use yaw, pitch and roll?

3
Python and Java API / Masks and Memory
« on: September 23, 2015, 04:25:13 PM »
I have a chunk with 600 photos, made up of 6 sets of 100. Each set of 100 photos has a different directory and a different mask. I have written some Python that gets the correct mask and applies it to each photo. This seems to work fine: I check each photo and it has the correct mask. However, when I try to save I get an Agisoft message saying "Not enough Memory". This isn't a physical limit of my PC, as Task Manager says I'm using about 6GB of the 32GB available. What's going on?

Here's the Python, in case it is doing something obviously silly:

import os
import PhotoScan

doc = PhotoScan.app.document

# Load one mask per photo directory, keyed by the directory path, and
# apply it to every camera in that directory (across all chunks).
# Keying by directory avoids relying on a separate index counter
# staying in step with the photo order.
masks = {}
for chunk in doc.chunks:
    for camera in chunk.cameras:
        current_path = os.path.dirname(camera.photo.path)
        if current_path not in masks:
            mask_name = os.path.join(current_path, "mask.png")
            print(mask_name)
            m = PhotoScan.Mask()
            m.load(mask_name)
            masks[current_path] = m
        camera.mask = masks[current_path]

print("applying masks finished")
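As a standalone sanity check of the grouping idea (plain Python with made-up photo paths, no PhotoScan needed), each directory should resolve to exactly one mask file:

```python
import os

# Made-up photo paths: 6 sets of 100 photos, one directory per set.
photo_paths = [
    "/survey/set%d/img%03d.jpg" % (s, i)
    for s in range(1, 7)
    for i in range(100)
]

masks = {}        # directory -> mask file path
assignments = {}  # photo path -> mask file path
for path in photo_paths:
    dir_path = os.path.dirname(path)
    if dir_path not in masks:
        masks[dir_path] = os.path.join(dir_path, "mask.png")
    assignments[path] = masks[dir_path]

print(len(masks))        # expect 6: one mask per set
print(len(assignments))  # expect 600: every photo assigned
```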

4
Python and Java API / loadReference again
« on: September 21, 2015, 04:20:53 PM »
At the moment I am loading the reference data manually: I set a chunk active, click the reference pane and select "import".

The reference file is held in a .txt file in csv format.

I tried a script that looks like it should work. The return value of "loadReference" is true, but the X,Y,Z values of the photos never update when I check the reference pane.

The script is simply:

print("Script started")

for i in range(len(doc.chunks)):
    chunk = doc.chunks[i]  # index into the list, not the list itself
    checkloaded = chunk.loadReference("path.txt", "csv")
    PhotoScan.app.update()
    print(i, checkloaded)

print("Photos imported")

Anyone see where I am going wrong?
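One failure mode I can think of (an assumption on my part, not confirmed): loadReference presumably matches rows to cameras by label, so if the labels in the file don't exactly match the camera labels, it could return true without updating anything. A plain-Python sketch of that check, with made-up labels:

```python
# Hypothetical labels: the reference file keeps extensions, the cameras don't.
ref_labels = {"IMG_0001.jpg", "IMG_0002.jpg"}
camera_labels = {"IMG_0001", "IMG_0002"}

# Rows whose label matches no camera would be silently ignored.
unmatched = ref_labels - camera_labels
print(sorted(unmatched))  # here, neither row matches a camera
```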

5
General / Difference between "Matching Points" and "Selecting Pairs"
« on: September 17, 2015, 08:09:49 PM »
I'm going round in circles a bit at the moment - sorry if I ask stupid questions.

I have an awful lot of data to process. I am running two Professional licenses of PhotoScan on AWS, connecting via VNC so that I can use the GPUs on the remote machines. The remote machines have 4 GPUs each. I am processing chunks of around 800 photos at a time, and the "Align Photos" stage has been taking around 2 hours. I give it the x, y, z, pitch, yaw and roll info via a txt import, and I set the camera calibration via a Python script.

What might cause the "Align Photos" stage to take a lot longer than usual? I've set a job going and it is predicting that the Align photos stage will take 10 hours. The data is stored on a local drive so it's not a network issue.

I have "fixed calibration" unchecked in the "camera calibration" screen for the sensors - does that make any difference?

I am running a similar data set on my home PC (not identical, though: photos from a different part of the survey). It says it is going to take 27 minutes at the stage immediately after the "detecting points" stage of "align photos", but instead of saying "Matching points" as the remote PC does, it says "Selecting pairs". Is there a magic option that I have selected by accident on my home PC but not set on the remote one?

I have previously run successful models on the remote machines in the same sort of timescales as my home one, so I know the hardware configuration is up to it and presume I've done something wrong. I've checked the preferences screen, and I am using 28 of the 32 cores and all 4 GPUs (though I know these are used for the dense point cloud stage rather than this initial stage).

Any hints and tips will be gratefully received.

6
Python and Java API / Split in chunks - all chunks have all the photos in?
« on: September 17, 2015, 02:36:21 PM »
I have a chunk with 840 photos. The photos all have x,y,z info and run along a road, so you'd think they should split nicely. I run the "Split in Chunks" script from the wiki repository and click the custom menu item. I go with the default 2 x 2 grid and get 4 new chunks, chunk 1\1 through chunk 2\2, but they all contain the original 840 photos.

What am I missing?
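For comparison, here's a rough standalone sketch (plain Python, made-up coordinates, no PhotoScan) of what I'd expect a 2 x 2 split by camera position to do: assign each camera to exactly one grid cell, so no cell ends up with all 840 photos:

```python
# Fake road-like track: 840 camera positions along a line.
positions = [(float(i), 0.5 * i) for i in range(840)]

xs = [p[0] for p in positions]
ys = [p[1] for p in positions]
min_x, max_x = min(xs), max(xs)
min_y, max_y = min(ys), max(ys)

cells = {}
for x, y in positions:
    # Map each coordinate onto grid cell 0 or 1 along its axis.
    cx = min(1, int(2 * (x - min_x) / (max_x - min_x)))
    cy = min(1, int(2 * (y - min_y) / (max_y - min_y)))
    cells.setdefault((cx, cy), []).append((x, y))

for cell, members in sorted(cells.items()):
    print(cell, len(members))
```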

7
Python and Java API / loadReference column order
« on: September 16, 2015, 01:59:46 PM »
I want to use loadReference (pp. 18-19 of the PDF http://downloads.agisoft.ru/pdf/photoscan_python_api_1_1_0.pdf).

Anyone know what the required csv column order is?
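In the meantime my working guess (an assumption copied from the GUI import dialog's default column order, so please verify) is label, x, y, z per line. A plain-Python sketch that just builds such a file in memory:

```python
import csv
import io

# Hypothetical rows in the assumed "label, x, y, z" order.
rows = [
    ("IMG_0001.jpg", 451200.12, 5411800.34, 77.2),
    ("IMG_0002.jpg", 451205.67, 5411803.91, 77.4),
]

buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue().strip())
```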

8
General / Settings advice required
« on: August 24, 2015, 12:36:38 PM »
Sorry in advance - a beginner's question.

I'm trying to create a model of buildings along a street from mobile mapping data. I can create a good model using Acute3d and its automatic "level of detail" option, and I am trying to create the same sample building in a model in Agisoft. I'm looking for advice on which settings to use when creating the dense cloud and the 3D model.

I've tried the defaults, which create a model very quickly, but it's not very usable (i.e. a very low density mesh that isn't very accurate). I've also tried the densest options with "ultra high"; this is very slow and produces a very dense mesh that is "bumpy".

So can you please advise me on dense cloud and 3D model settings that would produce a medium density polygon mesh with accurate vertices? I seem to be able to produce either low density and low accuracy, or extremely high density that is too bumpy and doesn't represent reality.

Any advice greatly appreciated.
