Show Posts


Messages - sjharwin

General / Re: Model export translation
« on: August 17, 2017, 01:52:09 PM »
Or perhaps I just need to not use 'local' in the export dialog and translate all the projects by the same amount?
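If the in-app shift keeps failing, one fallback is to apply the same translation to every exported OBJ yourself. A minimal sketch, assuming plain `v x y z` vertex lines; the `shift_obj` helper is hypothetical, not part of PhotoScan, and the shift values are illustrative:

```python
# Hypothetical post-export fix: subtract one agreed offset from every vertex
# in each exported OBJ so that all projects end up in the same local frame.
# Assumes plain "v x y z" vertex lines (no w coordinate, no vertex colours).

def shift_obj(lines, dx, dy, dz):
    """Return OBJ lines with (dx, dy, dz) subtracted from each vertex."""
    out = []
    for line in lines:
        if line.startswith("v "):
            _, x, y, z = line.split()[:4]
            out.append(f"v {float(x) - dx:.6f} "
                       f"{float(y) - dy:.6f} {float(z) - dz:.6f}")
        else:
            out.append(line)   # faces, normals, comments pass through
    return out

# Example: a UTM-sized vertex moved into a small local frame.
shifted = shift_obj(["v 500123.25 5250456.50 42.10", "f 1 2 3"],
                    500000.0, 5250000.0, 0.0)
```

Reusing the same `(dx, dy, dz)` for every project guarantees the meshes still connect afterwards.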

General / Re: Model export translation
« on: August 17, 2017, 01:43:30 PM »
Thanks for the speedy reply, Alexey. Yes, I am using UTM coordinates, and the coordinates stayed large when I changed to local (in the interface), so I assumed they would need to be translated. I have a bunch of projects in UTM that need to connect once exported as OBJ meshes. If I choose local and leave the shift as all zeroes, will it choose a different UTM-to-local shift for each project? How do I get access to the UTM-to-local shift values to check? I need to end up with a set of OBJs that all share the same local coordinate system (i.e. they can no longer be in UTM, because that messes up the triangles).
Please advise,

General / Model export translation
« on: August 17, 2017, 12:54:13 PM »
When I try to use the functionality to translate the model on export, the resulting model coordinates are unchanged. I am using the latest PS Pro.
See the attached image from the export and then the image showing what CloudCompare sees when I try to load that obj.
The reason I need to do this is that the exported OBJ is unable to deal with the size of the coordinates; it seems to truncate the Y coordinates, and that messes up the triangles in the mesh.
Am I doing something wrong?
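For context on why full UTM values corrupt the mesh: many OBJ pipelines store vertices as 32-bit floats, which keep only about 7 significant digits, so a 7-digit UTM northing loses centimetre-to-decimetre precision. A quick sketch of the effect (the coordinate values are illustrative):

```python
# Demonstrate the precision loss from storing a full UTM northing as a
# 32-bit float, versus storing the same coordinate after a local shift.
import struct

def to_f32(x):
    """Round-trip a Python float through 32-bit storage."""
    return struct.unpack("f", struct.pack("f", x))[0]

northing = 5250123.456              # typical UTM northing, metres
err_raw = abs(to_f32(northing) - northing)       # several centimetres

shift = 5250000.0                   # whole-metre local shift
local = northing - shift            # 123.456 m
err_shifted = abs(to_f32(local) - local)         # effectively zero
```

This is why a uniform shift to small local values keeps the triangles intact while preserving relative geometry.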


That realign command will also save me a lot of time. Thanks.

I have a related issue. I am doing research into calibration options. When I started, I created chunks for different flights and then merged them (so that I could do an initial alignment on a small set prior to locating markers and doing the proper alignment of the merged set, which then became my seed project for all subsequent testing). Now the camera calibration dialog shows a separate calibration for each of the original chunks, as though those photos were taken with different cameras. The problem is that the derived calibrations are then based on the original chunks instead of the entire photo set. I need the calibration to be based on the whole set, i.e. I need the software to see one camera, not several. Is there a way to tell the project that the merged chunk is one camera? If not, I fear I will need to redo close to 30 projects, since my research relies on a single calibration and I mistakenly assumed that the self-calibration would derive the camera model for the camera used, not chunk by chunk within the merged chunk.

If I cannot tell the project that the merged chunk is a single camera, then I assume I need to export all my masks and all my markers, create a new project with a single chunk from the start, and then import the masks and markers... correct? It would be nice to avoid that, since I have so many projects that are effectively the same set of photos, and all of these would need to be recreated.


General / Re: exported camera settings
« on: June 16, 2014, 05:37:10 AM »

I am also having issues with the pixel size PhotoScan reports versus the pixel size I calculate and the value the manufacturer reports. It seems PhotoScan is the one getting the wrong answer, but I am struggling to find any explanation of how PhotoScan arrives at the number it uses.

Does anyone have any hints on this problem? Should I just overwrite the value PhotoScan has come up with? Is it wrong because of a bug in the software, or am I missing something?
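One way to sanity-check the value is to derive the pixel pitch from the manufacturer's sensor dimensions. For example, for a Canon EOS 550D (22.3 × 14.9 mm sensor, 5184 × 3456 pixels, per Canon's published specifications):

```python
# Cross-check of physical pixel pitch from manufacturer specs
# (Canon EOS 550D: 22.3 x 14.9 mm sensor, 5184 x 3456 pixels).
sensor_width_mm = 22.3
image_width_px = 5184
pixel_pitch_um = sensor_width_mm / image_width_px * 1000.0  # ~4.30 um
```

If PhotoScan's reported pixel size differs noticeably from this figure, overwriting it with the datasheet-derived value is a reasonable check.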


General / Re: Tie Point Accuracy setting in Ground Control Settings
« on: February 19, 2014, 04:49:08 AM »
I am with Tom on the need for clarification of the Ground Control Settings - Measurement accuracy settings.

I am assuming a few things about these settings, as I cannot find much info in the wiki or in the help that defines the meaning of these values. My first assumption is that the accuracy value is 1 sigma. My second assumption (based on reading the forum) is that I should leave the tie point accuracy set to 4 pixels (which seems large, as Tom said). My final assumption is that the projection accuracy value is an indication of how well a well-defined feature can be pinpointed in an image, and therefore in the model... which I would have thought should be similar to the tie point accuracy. In general a human can identify a well-defined feature to around 0.4-0.6 of a pixel and a computer can do it to around 0.1, and I assume that is why you provide 0.1 as the default.

If you could provide clarification on these settings and how the default values are derived, I would be appreciative. My GCPs have been measured with a precise total station survey, and therefore I can confidently say they are good to 1-1.5 mm at 1 sigma. In one of the forum posts you suggest that I should therefore set the marker accuracy to 0... is this really the best approach?
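If, as assumed above, these accuracies are 1-sigma values feeding a least-squares adjustment, they would act as weights proportional to 1/σ². That is an assumption about the adjustment, not documented PhotoScan behaviour, but it shows why an accuracy of exactly 0 is problematic (it implies infinite weight):

```python
# Sketch of how 1-sigma accuracy settings would act as least-squares
# weights (w = 1 / sigma^2), assuming that is how the adjustment uses them.
def weight(sigma):
    if sigma <= 0.0:
        raise ValueError("sigma must be positive; 0 implies infinite weight")
    return 1.0 / sigma ** 2

# A 0.1 px projection accuracy weighs each observation 1600x more
# strongly than a 4 px tie point accuracy: (4 / 0.1)^2 = 1600.
ratio = weight(0.1) / weight(4.0)
```

On this reading, entering a tiny non-zero value (e.g. 0.001 m) rather than 0 would express "very accurate" without degenerate weights.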


General / Re: Automatic GCP identification
« on: December 03, 2012, 04:36:25 AM »
Thanks for the reply and suggestions.

The iron cross is fine if it is completely different from anything in your imagery (to avoid false detections), but the beauty of the targets currently generated by PhotoScan is that they are unique and result in numbered markers that can be coordinated in an automatic workflow.

If the iron-cross-style target is potentially better than the circular pattern, then perhaps something based on triangles would be more matchable from altitude. Also, I use between 20 and 80 GCPs, so I would think that 80-100 unique designs would be enough. Fewer markers might allow the design to be simpler and therefore more uniquely matchable. Ideally, an A3 print is as large as they should need to be... but I understand that image resolution plays a key role.
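On uniqueness: assuming a circular coded target must be readable at any rotation (an assumption about how such targets typically work, not a statement about PhotoScan's actual encoding), two IDs are only distinguishable if their bit rings fall in different cyclic-rotation classes, so the usable ID count is far below 2^12:

```python
# Illustration (not PhotoScan's actual scheme): canonical form of a 12-bit
# ring code under rotation. Two codes are the "same target" if any cyclic
# rotation maps one onto the other.
BITS = 12
MASK = (1 << BITS) - 1

def canonical(code):
    """Smallest value among all cyclic rotations of a 12-bit code."""
    return min(((code >> i) | (code << (BITS - i))) & MASK
               for i in range(BITS))

# Rotating a code never changes its canonical form:
rotated = ((0b000000001011 >> 3) | (0b000000001011 << (BITS - 3))) & MASK
```

Counting the distinct canonical forms over all 4096 codes gives only 352 rotation-distinct classes, which would be consistent with wanting 80-100 truly unique, robust designs.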


General / Re: Automatic GCP identification
« on: November 23, 2012, 06:29:31 AM »
Hi again,

I imported a list of all markers and their associated real-world coordinates. I manually identified markers in one photo and did an alignment and then tried to detect markers again... it did not seem to help.

What I was hoping was that the knowledge of marker location in the point cloud (and in one photo) would be used by the detection algorithm to find those markers in the other photos, and also markers that have not yet been found but have a real-world coordinate... am I expecting too much, do you think? Should this be a feature request?

If anyone at Agisoft has some explanation of marker detection that might guide me to a viable workflow that would be much appreciated.


General / Re: Automatic GCP identification
« on: November 22, 2012, 05:23:04 AM »

As Arko suggested, I tested the printed markers (12-bit, 20 mm markers, one per page). I printed 10 different markers (about every 50th page) at 3 different sizes (i.e. 30 printed targets, with IDs drawn from the 1-2047 range). I printed the first 10 on A4 as generated, and I also printed them as large as I could get them on A4 (135% of the generated A4 size) and on A3 (192%).

I placed ten of each size and took photos from 30m, 50m, 80m, 100m and 120m (our common UAV flying heights). I used a Canon 550D DSLR (18 MP) with a 20mm lens. I then imported each set (i.e. a separate project for each distance) and asked PhotoScan to detect the markers (I left the tolerance at 50)...

  • At 30m: it got most of the A3 markers and a couple of the A4 markers. It falsely detected marker 1 in a number of odd places, so I would avoid the really simple patterns. When I increased the tolerance it detected a couple more A3 markers (+2 at 70 and +1 at 90).
  • At 50m: it found only one marker, and incorrectly: marker 1 in the sky and on the hubcap of a car. Increasing the tolerance made no difference.
  • Needless to say, it found nothing in the more distant photography.

So, the conclusion I draw from this initial test is that the 20 mm marker PDF generated by PhotoScan (one marker per A4 page) is for close-range photography (<12-15m), and if the markers are enlarged by nearly 200% and printed on A3 you might get a 30-35m range... but I would choose higher-ID markers to avoid false detections caused by simple marker geometry. The detection is very sensitive to occlusion, so the markers need to be completely visible.
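These ranges line up with a back-of-the-envelope ground-sample-distance check. A sketch assuming a ~4.3 µm pixel pitch for the 550D (22.3 mm sensor width over 5184 pixels) and a ~0.28 m usable pattern width on A3 (both assumed figures, not measured):

```python
# Rough ground-sample-distance (GSD) check for the flight setup above
# (Canon 550D with a 20 mm lens; 4.3 um pixel pitch is an assumed figure
# derived from the 22.3 mm sensor width over 5184 pixels).
def gsd_m(height_m, pixel_pitch_m=4.3e-6, focal_m=0.020):
    """Ground footprint of one pixel at nadir, in metres."""
    return pixel_pitch_m * height_m / focal_m

def pixels_on_target(target_m, height_m):
    """Approximate pixels spanned by a target of the given size."""
    return target_m / gsd_m(height_m)

# An A3-size pattern (~0.28 m usable width, assumed) at two test heights:
px_30m = pixels_on_target(0.28, 30)     # ~43 px across the pattern
px_120m = pixels_on_target(0.28, 120)   # ~11 px across the pattern
```

At roughly 11 pixels across the whole pattern at 120m, there is simply not enough resolution to decode the ring bits, which matches the detection failures at the higher altitudes.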

Sadly, as it stands we cannot see any point in pursuing these markers for our workflow, as they are only useful when we fly at 30m or less and we often want to fly higher (our ceiling is 120m in Australia). If the detection algorithm could be improved so that such large markers are not needed, they may become viable.

I will now go and trial manually identifying markers in one photo first, to see whether that helps the detection algorithm...

Steve Harwin
PhD Candidate
University of Tasmania
School of Geography and Environmental Studies
Hobart, Tasmania, Australia
