Messages - jeremyeastwood

31
Python and Java API / project not saving properly
« on: January 23, 2015, 10:46:09 PM »
I've just upgraded my script to 1.1.0 and it seems to run fine and produce all the required outputs; however, it isn't saving the project properly (it produces a blank project.psz).  I've included my full script below - any ideas greatly appreciated.

Code: [Select]
import math
import PhotoScan

app = PhotoScan.Application()
doc = PhotoScan.Document()
chunk = PhotoScan.Chunk()  # note: chunk is created standalone, never attached to doc
chunk.addPhotos(image_list)

with open(quality_path, mode='w') as fd:
    chunk.estimateImageQuality()
    n = len(chunk.cameras)
    for i, image in enumerate(image_list):
        quality = chunk.cameras[i].frames[0].meta["Image/Quality"]
        fd.write('{image} {quality}\n'.format(image=image.split('/')[-1], quality=quality))

chunk.loadReferenceExif()
coord_system = PhotoScan.CoordinateSystem('EPSG::4326')
chunk.crs = coord_system
accuracy = PhotoScan.HighAccuracy
chunk.matchPhotos(accuracy=accuracy, preselection=PhotoScan.ReferencePreselection)
chunk.alignCameras()

# transform bounding box to desired mapping region
reg = chunk.region
trans = chunk.transform.matrix
newregion = PhotoScan.Region()

# Set region center:
center_geo = PhotoScan.Vector([bbox_center[0], bbox_center[1], 0.])  # z is a placeholder; existing region height is reused below
v_temp = chunk.crs.unproject(center_geo)
v_temp.size = 4
v_temp.w = 1
centerLocal = chunk.transform.matrix.inv() * v_temp
centerLocal.size = 3
newregion.center = PhotoScan.Vector([centerLocal[0], centerLocal[1], reg.center[2]])  # uses existing region height

# Set region size:
# generate scale factor
rot_untransformed = PhotoScan.Matrix().diag([1, 1, 1, 1])
rot_temp = trans * rot_untransformed
s = math.sqrt(rot_temp[0, 0] ** 2 + rot_temp[0, 1] ** 2 + rot_temp[0, 2] ** 2)

# scale desired size in metres to chunk internal coordinate system
geo_size = PhotoScan.Vector([bbox_size[0], bbox_size[1], 0])  # x/y size in metres; z is taken from the existing region below
inter_size = geo_size / s
newregion.size = PhotoScan.Vector([inter_size[0], inter_size[1], reg.size[2]])

#  rotate region bounding box
SinRotZ = math.sin(math.radians(bbox_rot))
CosRotZ = math.cos(math.radians(bbox_rot))
RotMat = PhotoScan.Matrix([[CosRotZ, -SinRotZ, 0, 0], [SinRotZ, CosRotZ, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
v = PhotoScan.Vector([0, 0, 0, 1])
v_t = trans * v
v_t.size = 3
m = chunk.crs.localframe(v_t)
m = m * trans
m = RotMat*m
s = math.sqrt(m[0, 0]**2 + m[0, 1]**2 + m[0, 2]**2)  # scale factor
R = PhotoScan.Matrix([[m[0, 0], m[0, 1], m[0, 2]], [m[1, 0], m[1, 1], m[1, 2]], [m[2, 0], m[2, 1], m[2, 2]]])
R = R * (1. / s)
newregion.rot = R.t()
chunk.region = newregion

chunk.buildPoints(error=1)
chunk.optimizeCameras()
chunk.saveReference(gc_path, "csv")
chunk.buildDenseCloud(quality=PhotoScan.HighQuality, filter=PhotoScan.MildFiltering)  # mild filter as default to improve trees / buildings
chunk.buildModel(surface=PhotoScan.HeightField, source=PhotoScan.DensePoints, face_count=PhotoScan.HighFaceCount)
chunk.model.closeHoles()
chunk.model.fixTopology()
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)
chunk.exportReport(report_path)
chunk.exportOrthophoto(orthophoto_path, blockw=blockw, blockh=blockh, color_correction=False, blending=PhotoScan.MosaicBlending, write_kml=True, write_world=True, projection=chunk.crs)
chunk.exportDem(dem_path, dx=required_dx, dy=required_dy, blockw=blockw, blockh=blockh, write_kml=True, write_world=True, projection=chunk.crs)
with open(complete_file, mode='w') as fd:
    fd.write('saving project')
doc.save(project_path)
coord_system = PhotoScan.CoordinateSystem('LOCAL_CS["Local CS",LOCAL_DATUM["Local Datum",0],UNIT["metre",1]]')
chunk.exportPoints(points_path, source=PhotoScan.DensePoints, format='las', projection=coord_system)
chunk.decimateModel(2000000)  # TODO: check what size model comes out as (200MB limit from sketchfab atm)
chunk.buildTexture(blending=PhotoScan.MosaicBlending)
chunk.exportModel(model_path, texture_format='tif', format='obj', projection=coord_system)

app = PhotoScan.Application()
app.quit()
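
Edit: for comparison, here's a minimal sketch of the structure I'd expect to save correctly (this is an assumption on my part, not a confirmed fix): the chunk is created through the document with doc.addChunk() rather than as a standalone PhotoScan.Chunk(), so doc.save() actually has a chunk to serialize.

Code: [Select]
import PhotoScan

doc = PhotoScan.Document()
chunk = doc.addChunk()  # chunk is owned by the document
chunk.addPhotos(image_list)
# ... alignment / reconstruction / exports as above ...
doc.save(project_path)  # the saved project now contains the chunk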

32
Ideally I'd like to define a series of points (lat, lng) and calculate the volume of the region inside - is this possible through Python scripting?

Thanks,

Jeremy
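
To make the question concrete, here's the generic calculation I have in mind, sketched in plain Python (no PhotoScan API assumed): sample DEM heights on a grid, keep the cells whose centers fall inside the polygon (ray-casting test), and sum (height - base) * cell_area.

Code: [Select]
def point_in_polygon(x, y, poly):
    """Ray-casting test; poly is a list of (x, y) vertex tuples."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def volume_above_base(cells, poly, base, cell_area):
    """cells: iterable of (x, y, height) DEM samples; only columns
    above the base plane are counted."""
    return sum((h - base) * cell_area
               for x, y, h in cells
               if h > base and point_in_polygon(x, y, poly))

For this to work with geographic coordinates, the polygon and DEM would need projecting to a metric system first, but the core summation would be as above.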

33
Python and Java API / Re: Model rotated when exporting as .obj
« on: October 31, 2014, 10:47:06 PM »
I'm having the exact same problem - it is an issue with loading the model into other software (and it does happen when using the GUI exporter / uploader as well), but a simple fix would be to rotate the model 90deg about the x-axis before export - how can I modify the above code to do this?

Thanks
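
For what it's worth, this is the kind of thing I'm imagining (an untested sketch, assuming chunk.transform is the 4x4 chunk-to-world matrix): pre-multiply the chunk transform by a 90deg rotation about the x-axis, then export as before.

Code: [Select]
import PhotoScan

# 90 deg about the x-axis: y -> z, z -> -y
R = PhotoScan.Matrix([[1, 0,  0, 0],
                      [0, 0, -1, 0],
                      [0, 1,  0, 0],
                      [0, 0,  0, 1]])
chunk.transform = R * chunk.transform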

34
OK, so "Crop invalid DEM" happens automatically when you call exportDem in Python at the moment?

Thanks for the quick response!

35
I'd like to emulate the "Crop invalid DEM" option for DEM export that you can select through the GUI, via Python scripting - is there any way to do this?

Thanks

36
General / Re: Difficulty stitching corn field
« on: October 08, 2014, 07:56:24 PM »
Not other than speeding up file transfer and processing time.  I agree that the full-size images would be better (as would increasing overlap and shutter speed); however, I've run other projects with full-size images and still run into difficulties stitching the center of large homogeneous areas, which is why I was hoping there might be some other options / settings in PhotoScan which might improve the results.  I've heard that Pix4D has special modes ('alternative processing') built specifically for agricultural mapping which often give improved results, so I was hoping there might be something similar (better?) in PhotoScan I could try - either a specific mode or a different group of parameter settings.  Any ideas would be really appreciated.

37
Python and Java API / Re: Setting mapping region
« on: October 04, 2014, 02:45:08 AM »
Hi Alexey,

I've used the following code (taken from the code earlier in this thread - big thanks to you and the other contributors!) to change my bbox x and y dimensions (and center location); however, I'm also seeing an unwanted shift in the bbox z center location when the transformation is applied.  This is particularly strange because the chunk region size and center z values (checked via the console) are unchanged after the transformation.  I've attached screenshots of the bbox before and after transformation, showing the console output for the region size and center.

My code:

Code: [Select]
import PhotoScan, math

doc = PhotoScan.app.document
chunk = doc.activeChunk
reg = chunk.region
trans = chunk.transform

newregion = PhotoScan.Region()
map_center = [-122.4004003, 37.7772147]
map_size = [50., 50.]
RotZDeg = 0.0

# SET MAPPING REGION (BOUNDING BOX)

# Set region center:
center_geo = PhotoScan.Vector([map_center[0], map_center[1], 0.])  # z is a placeholder; existing region height is reused below

v_temp = chunk.crs.unproject(center_geo)
v_temp.size = 4
v_temp.w = 1
centerLocal = chunk.transform.inv() * v_temp
centerLocal.size = 3

newregion.center = PhotoScan.Vector([centerLocal[0], centerLocal[1], reg.center[2]])  # uses existing region height

# Set region size

#<---- Rotation ---->
rot_untransformed = PhotoScan.Matrix().diag([1, 1, 1, 1])
rot_temp = trans * rot_untransformed

s = math.sqrt(rot_temp[0, 0]**2 + rot_temp[0, 1]**2 + rot_temp[0, 2]**2)
R = PhotoScan.Matrix([[rot_temp[0, 0], rot_temp[0, 1], rot_temp[0, 2]],
                      [rot_temp[1, 0], rot_temp[1, 1], rot_temp[1, 2]],
                      [rot_temp[2, 0], rot_temp[2, 1], rot_temp[2, 2]]])
R = R * (1.0 / s)  # note: this R is recomputed below from the rotated local frame

#<---- Size ---->
geo_size = PhotoScan.Vector(map_size)  # x/y size in metres; z is kept from the existing region below
inter_size = geo_size / s

newregion.size = PhotoScan.Vector([inter_size[0], inter_size[1], reg.size[2]])

# Set region rotation
SinRotZ = math.sin(math.radians(RotZDeg))
CosRotZ = math.cos(math.radians(RotZDeg))
RotMat = PhotoScan.Matrix([[CosRotZ, -SinRotZ, 0, 0], [SinRotZ, CosRotZ, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])  # just rotate about z-axis
#  rotate region bounding box
T = chunk.transform
v = PhotoScan.Vector([0, 0, 0, 1])
v_t = T * v
v_t.size = 3
m = chunk.crs.localframe(v_t)
m = m * T
m = RotMat*m
s = math.sqrt(m[0, 0]**2 + m[0, 1]**2 + m[0, 2]**2)  # scale factor
R = PhotoScan.Matrix([[m[0, 0], m[0, 1], m[0, 2]], [m[1, 0], m[1, 1], m[1, 2]], [m[2, 0], m[2, 1], m[2, 2]]])
R = R * (1. / s)
newregion.rot = R.t()

# put newregion to chunk
chunk.region = newregion

Any ideas on what I need to do to keep the z size and center fixed would be greatly appreciated.
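
In the meantime, this is the snippet I'm using to check the values (diagnostic only; it reuses the matrix-times-vector pattern from the code above): it projects the region center back to geographic coordinates, so running it before and after assigning newregion should show whether the z shift is real or just a display artefact.

Code: [Select]
c = chunk.region.center
v = PhotoScan.Vector([c[0], c[1], c[2], 1])
v_t = chunk.transform * v
v_t.size = 3
print('region center (geographic):', chunk.crs.project(v_t))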

38
General / Difficulty stitching corn field
« on: October 03, 2014, 11:07:40 PM »
I'm running into difficulties stitching large corn fields from aerial imagery, as shown here:

https://www.dropbox.com/s/fazxwnvjxwqvwun/Screenshot%202014-10-03%2012.35.30.png?dl=0

PhotoScan doesn't seem to be able to align the photos in the center of the field (see https://www.dropbox.com/s/oaw9qdza4l646ya/Screenshot%202014-10-03%2012.36.19.png?dl=0), and the parts it does align come out poorly (e.g. broken / non-straight corn rows), so I was hoping I could get some advice on ways to improve the stitch quality.

The images were taken with a Sony NEX5 (although they were downsampled to 2MP), have 60% front and sidelap, and the overall flight path was flown in a crosshatch pattern, such that the entire area was covered twice, with a 90deg rotation between each full coverage (to ensure at least one flight path is not aligned with the corn rows).  Each image was tagged with GPS info when captured, from a GPS device with +/-3m accuracy.  A sample image can be viewed / downloaded here:

https://www.dropbox.com/s/pbdud431ouldyfq/pict20140923_160305_0.jpg?dl=0

The PhotoScan workflow used high accuracy settings at each stage; photo alignment was optimized after the align stage, reprojection error was limited to 1, and the bounding box was fitted to the exact area of interest.  The report can be downloaded here:

https://www.dropbox.com/s/qohjnrsx8r2rbvg/report.pdf?dl=0

I have experimented with different aerial capture approaches - higher overlap, single flight paths (rather than the crosshatch pattern), higher altitudes, etc. - and have had some improvements, but I still run into the same issues in the center of large homogeneous areas such as corn fields, so I was hoping there might be improvements I could make to my PhotoScan workflow?

Any advice / suggestions would be greatly appreciated, and let me know if you want more details on any aspect of the image acquisition / photoscan workflow I've used.

Thanks

39
Thanks Alexey.

Is there any way to constrain maximum values of roll / pitch / yaw during the align so that the model doesn't get rotated (particularly in cases of single-line mapping)?  Can setting the roll, pitch and yaw values in the ground control data before alignment help with this?

40
Thanks Alexey - that worked perfectly!

41
General / Re: How do you manually upload a 3D model to web viewers?
« on: August 15, 2014, 07:58:56 PM »
Thanks Petrov - exporting in local coords worked nicely.

42
I have set my model to WGS84 early on in my workflow to export the orthophoto mosaic properly; however, after the orthophoto export, I'd like to export the 3D model in its local coordinate system (it needs to be in its local coord system to upload properly to web-based viewers).  Currently I am doing this with:

    chunk.exportModel(model_path, texture_format='tif', format='obj', projection=chunk.projection)

however, the model doesn't upload properly, so I suspect that "chunk.projection" is still returning the global (WGS84-based) coordinate system.  How can I get the chunk's local coordinate system to use as the projection input to the chunk.exportModel function?

Thanks
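
For reference, the workaround sketch I'm considering (this assumes PhotoScan.CoordinateSystem accepts a WKT string) builds a local coordinate system explicitly instead of reading it off the chunk:

Code: [Select]
local_crs = PhotoScan.CoordinateSystem(
    'LOCAL_CS["Local CS",LOCAL_DATUM["Local Datum",0],UNIT["metre",1]]')
chunk.exportModel(model_path, texture_format='tif', format='obj',
                  projection=local_crs)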

43
I'd like to export the lat, long, alt, roll, pitch and yaw estimates of the cameras (the ones that can be seen in the ground control pane when running PhotoScan manually) after aligning my images, to compare with their EXIF values - how can I do this?

Also, can I constrain the acceptable ranges for these values so that my model doesn't get shifted / rotated too much during the alignment stage?  Does the camera accuracy value affect this?

Thanks
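
To make the first part concrete, a rough sketch of what I'm after is below (it assumes camera.center holds the estimated camera position in internal coordinates, and estimates_path is just a placeholder name).  The roll / pitch / yaw part would presumably need camera.transform decomposed into angles, which is where I'm stuck.

Code: [Select]
with open(estimates_path, mode='w') as fd:
    for camera in chunk.cameras:
        if camera.center is None:  # skip cameras that failed to align
            continue
        v = PhotoScan.Vector([camera.center[0], camera.center[1],
                              camera.center[2], 1])
        v_t = chunk.transform * v
        v_t.size = 3
        pos = chunk.crs.project(v_t)  # lng, lat, alt for a geographic CRS
        fd.write('{} {} {} {}\n'.format(camera.label, pos[0], pos[1], pos[2]))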

44
Great, thanks Alexey.  If I do try to stitch a single flight line again in the future, is there a way of constraining the camera orientation / model plane to be in the geographic coordinate system (i.e. cameras facing down the z-axis, model in the x-y plane)?

45
Hi guys,

I'm doing a simple stitch of 4 images with ground control data; the images were all taken with the camera pointing straight down, and line up as shown in the first attachment.

When I align the photos (high accuracy, ground control pair selection), the images match up nicely, but the resulting model has been rotated away from the ground plane by about 90deg (see 2nd attachment).  It looks like Photoscan has estimated that all the images have to be rotated about the vector [1,1,0] by about 90deg for proper alignment, so I was wondering why such a major adjustment has been made (which is clearly physically unrealistic)?

As a consequence of this, the georeferenced orthophoto comes out completely skewed (as it is the projection of the model on the xy plane), while the actual reconstructed scene is fairly accurate (albeit rotated) - see 4th attachment (note the axes).

Are there any settings I can select to fix the model plane / allowable image rotations during alignment, or is this an issue with image quality, etc.?

Thanks
