Topics - ozbigben

1. Python and Java API / Setting bounding box via 2 markers
« on: July 18, 2018, 07:01:37 AM »
Hi all

I'm refining a workflow for doing high-resolution scanning using photogrammetry and am at the stage of adding some more automation. Most of the generic processing stuff I can probably work my way through, but I have one thing that's a bit more complex, similar to http://www.agisoft.com/forum/index.php?topic=8060.msg38554.

I'm using an L-shaped target with coded CPs to set the orientation of the image. This is working nicely, and I'm looking to add cropping to it. Having looked through a few posts, this seems best done by defining the bounding box after alignment so that only the required area is processed. I detect markers and import coords prior to alignment, so rotation of the model is not required. The bounding box would need to be aligned to the coordinate system (there's already a script for that) and then positioned/sized to match the CPs defining the top-left and bottom-right corners. The CP labels would be constant, named "target NNN" during detection. The height of the bounding box is OK after alignment.

The current setup is attached. The top-left CP would be added to the L-shaped scale bar; the bottom-right would be a single CP placed manually. I could hard-code the top-left coords, but the script would be a little more reusable getting the coords from a CP.
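For reference, here's the rough shape of what I'm imagining as a script (untested sketch only; the two corner labels are placeholders for whatever "target NNN" labels detection assigns, and it assumes the box has already been rotated to the coordinate system by the existing script):

Code: [Select]
# Untested sketch: size/position the region (bounding box) from two corner CPs.
# Assumes region.rot is already aligned to the coordinate system by the existing
# rotation script; only X/Y come from the markers, Z is left as-is after alignment.
import PhotoScan

chunk = PhotoScan.app.document.chunk

TOP_LEFT = "target 1"      # placeholder label for the CP on the L-shaped bar
BOTTOM_RIGHT = "target 2"  # placeholder label for the manually placed CP

def marker_position(chunk, label):
    for marker in chunk.markers:
        if marker.label == label:
            return marker.position  # internal chunk coordinates
    raise ValueError("Marker not found: " + label)

tl = marker_position(chunk, TOP_LEFT)
br = marker_position(chunk, BOTTOM_RIGHT)

region = chunk.region
centre = (tl + br) * 0.5

# Express the corner-to-corner vector in the region's own axes so the size
# lines up with the (already aligned) box orientation.
delta = region.rot.t() * (br - tl)

region.center = PhotoScan.Vector([centre.x, centre.y, region.center.z])
region.size = PhotoScan.Vector([abs(delta.x), abs(delta.y), region.size.z])
chunk.region = region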


Any help would be greatly appreciated, and I'm happy to share details of the rest of the setup. I'm using a Hasselblad H5D60 to scan at 600 dpi up to ~2x4 m, including foldouts from very large books which don't lie flat.

2. General / Anyone tried Mars stereo pairs?
« on: July 03, 2014, 01:49:21 AM »
Just curious if anyone's tried processing images from here: http://www.uahirise.org/stereo_pairs.php

3. General / Bracketed exposure models
« on: June 14, 2014, 03:37:01 AM »
As I mentioned in an earlier post, I was thinking of experimenting with bracketed exposures for shooting buildings, with a view to filling in holes in the darker recesses of the building with points derived from the higher exposures. I'm not entirely sure whether this is sensible or not, but I shot a quick test yesterday and the result was quite promising.

For now I'm processing the different exposure sets separately and then aligning and merging the point clouds. I'm only shooting one extra exposure (+2 EV). The first test added some extra noise along edges, but I only did a quick mask and I can see why this happened. I've set up a separate Photoshop action to mask these images, excluding everything at the top end of the pixel value range (starting with a max of 245, but that's just a number I pulled out of the air).
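For what it's worth, the same thing could probably be scripted outside Photoshop. Here's a rough Pillow sketch of the idea (untested; the 245 cutoff is still just that number I pulled out of the air, and the file names are only examples):

Code: [Select]
# Rough sketch of the Photoshop action: build an alpha mask that excludes
# everything at the top end of the pixel value range.
from PIL import Image, ImageChops

def add_highlight_mask(src_path, dst_path, cutoff=245):
    img = Image.open(src_path).convert("RGB")
    r, g, b = img.split()
    # Brightest channel per pixel, so a highlight blown in any one channel is caught.
    brightest = ImageChops.lighter(ImageChops.lighter(r, g), b)
    # Selection = everything below the cutoff; store it as the alpha channel,
    # mirroring the "create alpha channel from selection" step in the action.
    mask = brightest.point(lambda v: 255 if v < cutoff else 0)
    out = img.copy()
    out.putalpha(mask)
    out.save(dst_path)  # save as TIFF so the alpha channel survives

add_highlight_mask("IMG_0001_plus2ev.tif", "IMG_0001_plus2ev_masked.tif")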

The initial model from the normal exposures is here: https://sketchfab.com/models/d24123350ae2440086856d1c17193667

4. General / How far can a GoPro go...
« on: June 09, 2014, 02:14:14 AM »
First off, let me say that I know a GoPro is not the ideal camera for photogrammetry and I wouldn't consider using it in future. I bought mine for cycling/hiking videos and had taken a few image sets to try out in 123D Catch before getting PhotoScan. As part of getting familiar with PS, I've been seeing how good a model I could get using a GoPro and the standard version of PS. It's taught me a lot, and I thought I'd share my findings.

1. Lower your expectations  ;)

2. Forget "wide". The "normal" FOV may only give you a 7 MP image, effectively just a crop of the 12 MP wide image, but it avoids getting a lot of distant stuff in your sparse point cloud that you end up cropping out anyway, reducing the number of points for the next steps in the workflow.

3. You need a lot more images, and you need to photograph a lot closer than you might think. I shot these sets in time-lapse mode at 0.5 s per frame.

4. Mask the sky. I set up a Photoshop action to mask out most of the sky before starting. I try to shoot on lightly overcast days so the sky is usually white:
  • Add a white border to the image
  • Magic wand select the top left corner with a tolerance of about 20
  • Invert the selection
  • Create an alpha channel from the selection
  • Remove the white border
  • Save as TIFF
This provides a reasonably good starting point for creating masks. Adding the white border lets the selection grow around trees/other objects that extend to the edge of the frame. I might look at using ImageMagick for this, as it might be quicker (and simpler).
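If I do move it out of Photoshop, something along these lines would probably be the starting point (untested Pillow sketch of the same steps; the thresh argument to floodfill needs a reasonably recent Pillow, and the file names are only examples):

Code: [Select]
# Untested sketch: pad with a white border, flood-fill the "sky" from a corner,
# keep everything the fill didn't touch as the subject, store that as the alpha
# channel, then crop the border back off.
from PIL import Image, ImageChops, ImageDraw, ImageOps

def sky_mask(src_path, dst_path, border=10, tolerance=20):
    img = Image.open(src_path).convert("RGB")
    # The white border lets the fill wrap around trees/objects touching the frame edge.
    padded = ImageOps.expand(img, border=border, fill="white")

    filled = padded.copy()
    ImageDraw.floodfill(filled, (0, 0), (255, 0, 255), thresh=tolerance)

    # Sky = wherever the flood fill changed a pixel; subject = everything else.
    changed = ImageChops.difference(padded, filled).convert("L")
    subject = changed.point(lambda v: 0 if v else 255)

    out = padded.copy()
    out.putalpha(subject)
    out = out.crop((border, border, border + img.width, border + img.height))
    out.save(dst_path)  # save as TIFF so the alpha channel survives

sky_mask("GOPR0001.JPG", "GOPR0001_masked.tif")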

5. Create a sparse point cloud with at least 500,000 points
Remove "bad" points using gradual selection: Reprojection error 0.5, Reconstruction uncertainty 50 ( I know I've been exploring other values for these settings, but these seem to be consistent for my GoPro image sets)

From there, do everything at as high a quality setting as you can cope with, then export and clean up the mesh.

I've attached a screengrab of the latest test: 360 images (shot as 12 MP, wide) around the sculpture in the middle (I didn't choose that number; it's just a coincidence after removing under-exposed frames). This is the actual location: http://www.360cities.net/image/sheep-shape-behind-the-arts-centre-melbourne-victoria/#288.71,1.34,63.1

Sheep sculpture: http://studio.verold.com/projects/5394f586372e88020000043d (easier to see the mesh quality without texture)

Tree trunk (~180 images): http://studio.verold.com/projects/5392aa11eef3b50200000077

5. General / Clarification of Reconstruction uncertainty
« on: June 07, 2014, 07:25:15 AM »
I'm working through improving the quality of the dense point cloud and found this helpful explanation.

"Reconstruction uncertainty". In case you have only two cameras and a point is being triangulated by intersection of two rays there is a direction in which the variation for the point position is maximal and another direction with the minimal variation. Dividing one on another (max to min) we've got reconstruction uncertainty value. Mostly this criterion is designed for visualization and estimating the errors. In general this value characterizes the accuracy of positioning points in cloud.

"Reprojection error" demonstrates the accuracy of point positioning and is specified in pixels. We recommend to remove points with huge reprojection errors before optimizing photo alignment.

This has been really useful and is probably worth putting in the wiki. I'm just trying to properly understand how the numerical values for reprojection error and reconstruction uncertainty relate to quality (and thus what a practical minimum value would be). Is it relative to the input image size, or does it relate to the size of the image pattern that PS uses to "recognise" features?
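To get a feel for the number, I put together a little toy simulation (my own rough sketch, nothing to do with how PS computes it internally): triangulate a point from two noisy rays many times and take the ratio of the largest to smallest spread of the results. In the toy, at least, the ratio is driven by the ray geometry (baseline vs. distance) rather than by image size, since the noise scale roughly cancels out of the ratio.

Code: [Select]
# Toy Monte Carlo: two cameras, one point, small angular noise on each ray.
# The max/min spread of the triangulated positions approximates the
# "reconstruction uncertainty" ratio for that geometry.
import numpy as np

def intersect(p1, d1, p2, d2):
    # Intersection of the 2D lines p1 + t*d1 and p2 + s*d2.
    t, _ = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t * d1

def noisy_dir(cam, point, noise, rng):
    angle = np.arctan2(point[1] - cam[1], point[0] - cam[0]) + rng.normal(0, noise)
    return np.array([np.cos(angle), np.sin(angle)])

cam1, cam2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])  # 1 unit baseline
point = np.array([0.5, 5.0])                              # point 5 units away
noise = 0.0005                                            # ray noise in radians

rng = np.random.default_rng(0)
hits = np.array([intersect(cam1, noisy_dir(cam1, point, noise, rng),
                           cam2, noisy_dir(cam2, point, noise, rng))
                 for _ in range(5000)])

eig = np.linalg.eigvalsh(np.cov(hits.T))
print("uncertainty ratio (max/min):", np.sqrt(eig[-1] / eig[0]))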

I've managed to remove too many points a couple of times, which is handy as I now know what effect that has on the dense point cloud.

6. General / Importing meshes: workflow options
« on: May 31, 2014, 08:59:19 AM »
Back again...
I've been reprocessing some old images and discovered importing meshes into PS. As I'm exploring different workflows at the moment, I was wondering if anyone would care to share some of the uses they have for importing meshes back into PS. Some stuff I've tried so far:
  • Exporting the dense point cloud and generating a mesh in MeshLab (Poisson reconstruction with a "high" samples-per-node value to create a smoother mesh), then importing it back into PS to generate the texture (rough script sketch below). Not convinced this was entirely practical, and my thinking may have been flawed on this one.
  • Exporting mesh for editing/smoothing in MeshMixer. My 3D sculpting skills need work too  ;) (https://www.flickr.com/photos/ben-kreunen/14123270998/)
  • (not tried yet) Texturing laser scans. Photograph the object, create a dense cloud in PS and export the points, align the laser scan to the PS point cloud in MeshLab, then import the laser scan into PS and build the texture.
While the last one sounds like a lot of extra work, it's not really. Texturing is one of the weak points of the laser scanners we've looked at, and manually aligning/painting photographic textures in ZBrush is a lot harder than it looks (to do well).
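For anyone with the Pro edition, the first of those could probably be scripted along these lines (untested, and the method names/arguments are from memory of the API reference, so double-check them against your version):

Code: [Select]
# Untested sketch of the round trip: export points, mesh externally, re-import, texture.
import PhotoScan

chunk = PhotoScan.app.document.chunk

# 1. Export the point cloud for meshing in MeshLab (check the source/format
#    arguments for your version to make sure it's the dense cloud).
chunk.exportPoints("dense_cloud.ply")

# ... run the Poisson reconstruction in MeshLab, save e.g. smoothed_mesh.ply ...

# 2. Bring the smoothed mesh back in and texture it from the original photos.
chunk.importModel("smoothed_mesh.ply")
chunk.buildUV()
chunk.buildTexture()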

7. General / Newbie intro
« on: May 31, 2014, 08:35:09 AM »
Hi all

I know I've posted a bit here already, but I figured I may as well introduce myself since I'll be hanging around for a while as I learn more.
I'm the Tech Support Officer for a digitisation service in a uni library (http://digitisation.unimelb.edu.au/). While we don't provide a 3D scanning service as such, we do have some equipment to help other departments try stuff out and pass on what we've learnt to help them get their projects up and running faster. So far this has included fields such as archaeology, art (mostly sculpting students) and medicine.

I'm a scientific photographer by qualification and have been working with spherical panoramas since they were only cylindrical. One of the great challenges with working in a uni is that no single research group wants to spend a lot of money on scanning equipment, and the politics and bureaucracy of sharing mean joint purchases are "unusual".

I've been using PhotoScan (Standard) for a few months, and have also used some of the early structured light scanners (cheap camera and projector, awful software) and Autodesk's 123D Catch, and we currently have a NextEngine with most of the bells and whistles. I've reached the stage where I'm producing some fairly good results thanks to the docs and tips in this forum, but I know I have a lot more to learn. I tend to think out loud, and ramble a bit (can you tell? ;)), but I find it's a useful way of sharing ideas. I have the occasional hare-brained idea, so please feel free to correct me if my mouth is ahead of my brain.

Cheers

Ben
