
Show Posts



Messages - aneverman

Pages: [1]
1
General / Units for gradual selection values
« on: September 21, 2016, 05:03:02 AM »
Hi all,

Can anyone tell me what the units are for the values used in the Gradual Selection tool? That is, are these values errors in pixel distances, standard deviations, etc.?

Thanks

2
General / Re: No Dense Cloud after batch processing
« on: August 11, 2016, 03:47:18 AM »
Hi Alexey,

No, unfortunately I do not have a log. I am creating one now for a rerun of the model.

I simply had a batch process to create:

1) Dense cloud: Ultra High Quality, Aggressive depth filtering.
2) Generate a report

I also ticked the box to save after each step.

I have had this issue on three different computers, so I tend not to use batch processing. But after my workstation suffered two power cuts in one week recently, I have started using batch processing again to make sure the project saves, in case I can't get to my machine between the processing completing and a technical issue that could cost me the model.

Is there any way I can save progress as the processing runs, to speed up reruns if the PC crashes or the model doesn't work? Would the "reuse depth maps" option help here?
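For what it's worth, the "save after each step" idea can be sketched as a plain-Python pattern (step names and the save callback here are hypothetical, not PhotoScan API calls): run each stage, then persist immediately, so a crash loses at most the stage in progress.

```python
def run_with_checkpoints(steps, save):
    """Run (name, fn) steps in order, calling save(name) after each
    so a crash loses at most the step that was in progress."""
    completed = []
    for name, fn in steps:
        fn()
        save(name)
        completed.append(name)
    return completed

# usage with dummy stand-in steps
log = []
steps = [("dense_cloud", lambda: log.append("built cloud")),
         ("report", lambda: log.append("wrote report"))]
run_with_checkpoints(steps, lambda name: log.append("saved after " + name))
```

In a real PhotoScan script the step functions would be the processing calls and `save` would write the project file; the point is only the ordering.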

3
General / No Dense Cloud after batch processing
« on: August 10, 2016, 06:42:55 AM »
Hi guys,

I have a recurring issue when I run batch processes. I had a batch process set up to build a dense cloud, then generate a report, saving the project after each step. The report was generated but is just a blank page. I have had this issue before, so I usually don't use batch processing.

Has anyone experienced this, or does anyone have ideas on how to start troubleshooting it?

Thanks

4
If you are working on topography datasets, shadows can introduce elevation error into your DEMs, and results are improved by masking shadow areas.
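As a crude illustration of the masking idea, shadow pixels can be flagged with a simple brightness threshold (pure Python, and the threshold value is hypothetical; a real workflow would build masks in an image editor or via PhotoScan's masking tools):

```python
def shadow_mask(gray, threshold=0.15):
    """Return a boolean mask that is True where a normalized grayscale
    image (2-D list of values in [0, 1]) is darker than `threshold`."""
    return [[value < threshold for value in row] for row in gray]

# tiny 2x2 example image: left column dark (shadow), right column lit
img = [[0.05, 0.50],
       [0.10, 0.90]]
mask = shadow_mask(img)  # [[True, False], [True, False]]
```

A fixed threshold is the simplest possible approach; anything serious would need per-image or adaptive thresholding.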

5
Camera Calibration / Using lens model with GCP alignment
« on: March 16, 2016, 02:17:52 AM »
Hi all,

I am using PhotoScan for SfM models. I have 12 GCPs in the scene which have been surveyed using a total station. I have been adding the lens calibration file from AgiSoft Lens to the project prior to photo alignment. I then add the GCPs after sparse cloud generation, update the georeferencing and optimise the cameras.

My question is: is it worth importing the lens model, or do my GCP registrations essentially "override" the lens calibration loaded from a file? Or does the lens model help with the initial alignment and then get overwritten?

Thanks,

Andrew

6
General / Re: Can I change the camera file path
« on: December 09, 2015, 10:47:42 PM »
Awesome, thanks stihl  ;D

7
General / Can I change the camera file path
« on: December 07, 2015, 11:15:46 PM »
Hi all,

I would like to be able to move my projects between computers (i.e. from a processing workstation to my office PC or home PC). To do so I need to change the paths to the photo locations. Is there a way to do this?

I realise it may be easier to set the projects up on an external hard drive, but I have quite a few models on my office PC that I want to move to our new workstation. I also want to save the projects to a backup location, and may need to change paths if anything goes wrong.
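If it helps anyone thinking about this, the underlying operation is just a prefix swap on each photo path. A standalone sketch (the roots and filenames are hypothetical; PhotoScan's Python API does expose per-camera photo paths, but this snippet deliberately avoids it so it runs anywhere):

```python
import os

def remap_photo_path(path, old_root, new_root):
    """Swap the leading directory of a photo path, e.g. when a project
    moves from a workstation to another machine."""
    norm = os.path.normpath(path)
    old = os.path.normpath(old_root)
    if not norm.startswith(old + os.sep):
        raise ValueError("path is not under old_root: " + path)
    # keep the part of the path below old_root, graft it onto new_root
    return os.path.join(os.path.normpath(new_root), norm[len(old) + 1:])
```

A script would loop over every camera in the project and apply this to each stored path.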

Cheers

8
General / Photo Alignment with GCPs
« on: November 23, 2015, 10:10:35 PM »
Hi All,

I am having trouble with the alignment of some of my models after I add GCPs to the photos. I have 12 GCPs surveyed in with a total station. The GCP coordinates are correct, and they plot correctly in ArcMap. The GCPs have between 52 and 221 projections each. When I first aligned the photos with known GCPs, the sparse point cloud appeared to generate quite well. After adding the GCPs, one of them (no. 7) sits well off to the left of the others in the sparse cloud. I also have a couple of GCPs corresponding to the total station setup position and backsights; these have not been identified in any of the photos, but were ticked in the Markers pane. Could they be causing the alignment issue, given that they have no projections?

Thanks

9
General / Batch processing
« on: September 20, 2015, 10:25:42 AM »
We are looking at starting a project that will use ~3200 photos at 24 MP. Our computer does not have enough RAM to process these in one chunk at the quality we want, as we wish to use Ultra High quality height-field models. The reason we need such quality is that we are modelling very small-scale processes (stone movement).

So to process these, should we either drop to High quality or process in chunks?

My questions are:

1) Is there a significant difference between High and Ultra High quality? Do these produce different levels of accuracy, or just different point densities?

2) If I process in chunks, will I get spurious points around the edge of each chunk, and therefore problems when I merge the point clouds later on? Or will the chunks merge back together seamlessly?

If it makes a difference, I will be using ground control points measured with RTK-dGPS or a total station.
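One way people reduce seam problems when chunking is to let neighbouring chunks share some photos, so the merged clouds have common coverage. A minimal sketch of that split (pure Python; the chunk size and overlap values are hypothetical, not recommendations):

```python
def split_with_overlap(photos, chunk_size, overlap):
    """Split an ordered photo list into chunks where each chunk shares
    `overlap` photos with its neighbour, giving common tie areas for merging."""
    assert chunk_size > overlap, "chunk must be larger than the overlap"
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(photos), step):
        chunks.append(photos[start:start + chunk_size])
        if start + chunk_size >= len(photos):
            break  # last chunk reached the end of the list
    return chunks

# 10 photos, chunks of 4 sharing 1 photo with each neighbour
split_with_overlap(list(range(10)), 4, 1)
# -> [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9]]
```

Whether overlap alone is enough, or GCPs in every chunk are also needed, is a separate question for the alignment experts here.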

Thanks in advance

10
General / Re: RAM requirements
« on: September 20, 2015, 10:13:58 AM »
Thanks Alexey

11
General / RAM requirements
« on: September 20, 2015, 05:18:17 AM »
Hi all,

Just a quick question regarding RAM. The PhotoScan memory requirements PDF says that 2000 photos at 12 MP need 128 GB of RAM. If I have 24 MP photos, does this requirement double or quadruple?
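For my own back-of-envelope estimate I have been assuming peak RAM scales roughly linearly with total pixel count (photos × megapixels) — that is an assumption on my part, not an official figure — which would make 24 MP double, not quadruple, the 12 MP numbers:

```python
def estimated_ram_gb(n_photos, megapixels,
                     base_photos=2000, base_mp=12, base_gb=128):
    # ASSUMPTION: peak RAM scales linearly with photo count and with
    # megapixels per photo; baseline taken from the memory requirements PDF.
    return base_gb * (n_photos / base_photos) * (megapixels / base_mp)

estimated_ram_gb(2000, 24)  # -> 256.0, i.e. double the 12 MP figure
```

If the scaling is actually worse than linear at some stage, this would underestimate, so I'd welcome correction.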

Thanks

12
General / Re: Another workstation build question
« on: August 26, 2015, 01:20:56 AM »
Thanks dtmcnamara, those are some great insights.


13
General / Another workstation build question
« on: August 25, 2015, 12:05:02 AM »
Hi all,

I've read through a lot of the workstation build posts, but since technology changes so quickly I wanted to check that the information is still applicable.

I've been assigned the task of building a dedicated workstation for my university to run PS projects and R statistics calculations.

At this stage we will be using PS to build high-accuracy georeferenced DEMs from ~3200 24 MP photos. We are really hoping to process these in one chunk, as we aren't sure what errors splitting the photos into chunks would introduce, such as random edge effects and errors when merging the clouds. I don't think we will be building a mesh from the point cloud in PS; we will probably process the point cloud in CloudCompare or ArcGIS. But I guess we could split the point cloud to build meshes and textures for display purposes?

Our budget is around 3000 USD. It really needs to be a workstation rather than a server for IT management reasons.

So my questions are:

1) Is it worth running two Intel CPUs if I want to use GPGPU processing for point cloud building, or am I better off spending the money on two GPUs to speed up depth map building and point cloud generation? And am I better off with 4-core 4 GHz CPUs or 6-core 3.3 GHz ones?

2) Can PS utilize two GPUs?

3) Does GPU memory size make much difference? That is, am I better off with a lower-memory CAD card at the same price as a gaming-grade card? I had read that PS can't really make use of CAD cards because it only uses single-precision calculations, but that doesn't mean anything to me!

I was looking at the Radeon cards, as I read they are better for OpenCL processing, and was considering the R9 390 or the HD 7970 at around the same price. How do you choose between them?

4) Does RAM speed make a lot of difference? Following the spec sheet, we need around 128 GB of RAM to process the number of photos we are looking at, possibly even more. Given our limited budget, we are really looking at DDR3 RAM. But would DDR4 be a significant advantage? Would DDR3-2133 be worthwhile over DDR3-1600?

5) Does a RAID 0 HDD array speed things up, or are all the speed gains in the processing itself? Would I be better off just using a high-RPM HDD?


Thanks in advance for any answers!
