Agisoft Metashape
Agisoft Metashape => Python and Java API => Topic started by: vik748 on September 02, 2019, 06:12:35 AM
-
Hi,
I would like to create an animation that looks at the 3D model from the image camera positions, i.e. the viewpoints we get when we right-click an image and choose "Look Through".
When I tried to add these positions to the animation and check the transform, I see some differences in the numbers. Could someone suggest how one might generate the animation camera track file containing all the image camera positions? A script would be nice, but if you can point out the math required, I can come up with the Python script.
Thanks in advance. Cheers!
-
Hello vik748,
Please check the following script that will create new camera track, where each keyframe corresponds to the location/orientation of every aligned camera in the chunk:
import Metashape

chunk = Metashape.app.document.chunk
chunk.addCameraTrack()
track = list()
for camera in list(chunk.cameras):
    if (camera.type == Metashape.Camera.Type.Regular) and camera.transform:
        keyframe = chunk.addCamera()
        keyframe.type = Metashape.Camera.Type.Keyframe
        keyframe.transform = camera.transform
        track.append(keyframe)
chunk.camera_tracks[-1].keyframes = track
chunk.camera_track = chunk.camera_tracks[-1]
-
Hi Alexey,
it seems the keyframe transform has its Y and Z axes inverted relative to the camera transform, so:
keyframe.transform = camera.transform
has to be modified to:
keyframe.transform = camera.transform * Metashape.Matrix().Diag([1, -1, -1, 1])
so that each keyframe looks towards the model instead of in the opposite direction... see https://www.agisoft.com/forum/index.php?topic=11146.0
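The effect of that Diag matrix can be checked outside Metashape with plain 4x4 matrix multiplication (a minimal sketch with illustrative matrices, not data from a real project): right-multiplying a camera transform by diag(1, -1, -1, 1) negates its second and third columns, i.e. flips the camera's local Y and Z axes while leaving its position untouched.

```python
def matmul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Identity camera orientation with a translation (camera at (5, 2, 1)).
cam = [[1, 0, 0, 5],
       [0, 1, 0, 2],
       [0, 0, 1, 1],
       [0, 0, 0, 1]]

# Equivalent of Metashape.Matrix().Diag([1, -1, -1, 1]).
flip = [[1,  0,  0, 0],
        [0, -1,  0, 0],
        [0,  0, -1, 0],
        [0,  0,  0, 1]]

out = matmul(cam, flip)
# The Y and Z axis columns are negated; the position column is unchanged.
print([row[1] for row in out])  # Y axis column: [0, -1, 0, 0]
print([row[3] for row in out])  # position column: [5, 2, 1, 1]
```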
-
Hi Paulo,
Can this script be used on Photoscan 1.4?
-
I do not think so... it seems the keyframe concept was introduced with Metashape 1.5.
-
Hi,
When creating a camera track for an animation based on replicating the image camera positions (like the script discussed in this post), is there a way to make camera 0 look at camera 1, camera 1 look at camera 2, etc., so as to place the viewpoint at each camera along the track, looking at the next camera position?
thanks
Peter
-
Hello Peter,
the following code will create an animation where each animation track camera has the same position as the corresponding image camera but looks at the next camera, with roll equal to 0:
import Metashape, math

chunk = Metashape.app.document.chunk
track = chunk.addCameraTrack()
track.label = "Camera Track Path"
keyframes = list()
T = chunk.transform.matrix
# only regular aligned cameras, not keyframes
cameras = [camera for camera in chunk.cameras if camera.type != Metashape.Camera.Type.Keyframe and camera.transform]
for i in range(0, len(cameras) - 1):
    cc1gc = T.mulp(cameras[i].center)
    m = chunk.crs.localframe(cc1gc)
    if 'PROJCS' not in chunk.crs.wkt and 'GEOGCS' in chunk.crs.wkt:  # case of geographic CRS
        cc1 = m.mulp(T.mulp(cameras[i].center))
        cc2 = m.mulp(T.mulp(cameras[i + 1].center))
    else:
        cc1 = chunk.crs.project(T.mulp(cameras[i].center))
        cc2 = chunk.crs.project(T.mulp(cameras[i + 1].center))
    # atan2 handles all quadrants and avoids division by zero when
    # consecutive cameras share the same northing
    yaw = math.atan2(cc2.x - cc1.x, cc2.y - cc1.y) * 180.0 / math.pi
    pitch = math.asin((cc2.z - cc1.z) / (cc2 - cc1).norm()) * 180.0 / math.pi
    R = Metashape.Utils.ypr2mat(Metashape.Vector((yaw, 90 + pitch, 0)))  # rotation matrix constructed from (yaw, pitch, roll) vector
    R = Metashape.Matrix([[m[0, 0], m[0, 1], m[0, 2]], [m[1, 0], m[1, 1], m[1, 2]], [m[2, 0], m[2, 1], m[2, 2]]]).t() * R * Metashape.Matrix().Diag((1, -1, -1))
    row = list()
    for j in range(0, 3):
        row.append(Metashape.Vector(R.row(j)))
        row[j].size = 4
        row[j].w = cc1gc[j]
    row.append(Metashape.Vector([0, 0, 0, 1]))
    M = Metashape.Matrix([row[0], row[1], row[2], row[3]])
    pos = chunk.addCamera()
    pos.type = Metashape.Camera.Type.Keyframe
    pos.transform = T.inv() * M
    keyframes.append(pos)
track.keyframes = keyframes
chunk.camera_track = track
print("Done")
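As a side check, the look-at angle math can be verified in plain Python, independent of the Metashape API (a standalone sketch using only the math module; the function name is mine):

```python
import math

def look_at_angles(cc1, cc2):
    """Yaw and pitch (degrees) from point cc1 towards point cc2.

    Yaw is measured clockwise from north (+Y); atan2 covers all
    quadrants, including the case cc2.y == cc1.y.
    """
    dx = cc2[0] - cc1[0]
    dy = cc2[1] - cc1[1]
    dz = cc2[2] - cc1[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    yaw = math.degrees(math.atan2(dx, dy))
    pitch = math.degrees(math.asin(dz / dist))
    return yaw, pitch

# Looking due east from the origin: yaw 90, pitch 0.
print(look_at_angles((0, 0, 0), (10, 0, 0)))  # (90.0, 0.0)
# Looking due north and 45 degrees up: yaw 0, pitch ~45.
print(look_at_angles((0, 0, 0), (0, 1, 1)))
```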
Hope this can be useful,
-
Hello Paulo,
This is great, exactly what I need. impressive !
Many thanks !!
Peter
-
Hello Paulo,
FYI - it looks like the cameras are not really looking towards the next camera yet (see attachment).
I could not immediately figure out where the math goes wrong.
I will dig in some more - your script is an excellent starting point anyway!
thanks
Peter
-
Peter,
is your project referenced in a Geographic Coordinate System (GEOGCS, with latitude/longitude in degrees), e.g. WGS84, or in a Projected Coordinate System (PROJCS, with easting/northing in meters or feet), e.g. WGS84 / UTM 14?
If it is referenced in a GEOGCS, please look at the updated code in my previous post... it corrects the calculations for the geographic CRS case...
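For reference, the PROJCS/GEOGCS test in the script is just a substring check on the CRS WKT string; a minimal sketch with shortened, illustrative WKT fragments (real definitions are longer):

```python
def is_geographic(wkt):
    """Heuristic used in the script: treat the CRS as geographic when
    its WKT has a GEOGCS node but no PROJCS node."""
    return 'PROJCS' not in wkt and 'GEOGCS' in wkt

# Shortened, illustrative WKT fragments.
wgs84 = 'GEOGCS["WGS 84", DATUM["WGS_1984"]]'
utm14 = 'PROJCS["WGS 84 / UTM zone 14N", GEOGCS["WGS 84"]]'

print(is_geographic(wgs84))  # True  -> use local frame for angles
print(is_geographic(utm14))  # False -> projected, use chunk.crs.project()
```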
-
Hi,
It's NAD83 / UTM zone 10N (EPSG::26910)
(It's project default - at this moment, it's a don't care)
Peter
-
Hi Peter,
it is perplexing. I ran the code on a small project referenced in NAD83 / UTM 11 and got appropriate results...
Another example shows a camera track created from 8 spherical (360) images taken inside a church with local coordinates (the XY plane corresponds to the ground floor).
Maybe on your side you could check whether your X (easting) and Y (northing) coordinates are compatible with UTM northern-hemisphere coordinates: X in the 200,000 to 800,000 range and Y in the 0 to 10,000,000 range. If your X coordinates are in the -120 to -126 range, then your CRS was wrongly defined as UTM 10N when in fact the coordinates correspond to WGS84 or NAD83 (lat, lon).
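That sanity check can be sketched as a small standalone helper (the ranges are the UTM northern-hemisphere bounds mentioned above; the function name and return labels are mine):

```python
def guess_crs_kind(x, y):
    """Rough heuristic: classify one coordinate pair as UTM-like or
    lat/lon-like.

    UTM northern-hemisphere eastings fall roughly in 200,000..800,000
    and northings in 0..10,000,000; longitude/latitude pairs fit in
    -180..180 / -90..90.
    """
    if 200000 <= x <= 800000 and 0 <= y <= 10000000:
        return "utm-like"
    if -180 <= x <= 180 and -90 <= y <= 90:
        return "latlon-like"
    return "unknown"

print(guess_crs_kind(552000, 5430000))  # utm-like
print(guess_crs_kind(-123.2, 49.1))     # latlon-like: CRS likely misdefined as UTM
```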
-
Hello Paulo,
do you have any idea how we could use or adapt your script so that the animation is built only from the cameras that are enabled in the chunk, rather than using the pose of disabled cameras?
In other words, do not consider the disabled cameras of the chunk when applying the script.
thank you.
antoine
-
Actually the answer can be found here
https://www.agisoft.com/forum/index.php?topic=10829.msg48926#msg48926