
Show Posts

This section allows you to view all posts made by this member.


Messages - varadg

Pages: 1 [2]
16
Thanks Alexey. So, if this information is provided within the EXIF data for all images, I assume it will speed up the computation process?

Also, currently, I have the Lat/Long/Alt & FocalLength available. However, it is not embedded as EXIF in the images and exists as a separate stream. Is it possible to use the API to supply this data separately for the corresponding images during the alignment phase? Or is it possible to write code to embed it as EXIF data into each image? Are there any tools you can suggest for this?
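(As an aside on the embedding question: EXIF stores GPS coordinates as degrees/minutes/seconds rational pairs, so separate decimal Lat/Long values need converting first. Below is a minimal sketch of that conversion; the helper name is hypothetical, and a tool such as exiftool or a library such as piexif would still be needed to write the actual tags into the images.)

```python
from fractions import Fraction

def decimal_to_dms_rationals(value):
    """Convert a decimal coordinate (e.g. 12.9716) into the
    ((deg, 1), (min, 1), (sec_num, sec_den)) rational triple
    used by the EXIF GPSLatitude / GPSLongitude tags."""
    value = abs(value)  # the sign goes into GPSLatitudeRef / GPSLongitudeRef ('N'/'S', 'E'/'W')
    degrees = int(value)
    minutes_full = (value - degrees) * 60
    minutes = int(minutes_full)
    seconds = Fraction((minutes_full - minutes) * 60).limit_denominator(10000)
    return ((degrees, 1), (minutes, 1), (seconds.numerator, seconds.denominator))
```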

17
Hi. Can someone educate me as to what EXIF data / metadata can be, or is, used by Agisoft in the process of generating an orthophoto from aerial images? I expect it would be helpful to have information such as the focal length, camera calibration parameters, geolocation data, etc. Can someone provide a comprehensive list?

18
Python and Java API / Re: Select all images for alignment using API
« on: March 02, 2017, 03:22:03 PM »
Hi Alexey. I tried your suggestion. This is the code I used -

Code:
doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(images)

# image matching and alignment
for frame in chunk.frames:
    frame.matchPhotos(accuracy=PhotoScan.HighAccuracy, preselection=PhotoScan.GenericPreselection, keypoint_limit=40000, tiepoint_limit=1000) # HighestAccuracy

chunk.alignCameras()

realign_list = list()
for camera in chunk.cameras:
    if not camera.transform:
        realign_list.append(camera)
chunk.alignCameras(cameras=realign_list)
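(Editorial aside: the re-alignment selection in the snippet above can be factored into a small standalone helper. This is a duck-typed sketch that assumes only what the code itself shows, namely that a camera which failed to align has a falsy `transform`:)

```python
def unaligned(cameras):
    """Return the cameras with no estimated transform, i.e. the
    ones a second alignCameras() pass should target."""
    return [camera for camera in cameras if not camera.transform]
```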


I have attached the output from the GUI & the API. As you can see, the output from the GUI is still quite different from what I get from the API. What am I doing wrong?

For reference, this is the full orthophoto generation pipeline I'm using -

Code:
doc = PhotoScan.app.document
doc.save(project_psx_path)

# add photos
chunk = doc.addChunk()
chunk.addPhotos(images)

# image matching and alignment
for frame in chunk.frames:
    frame.matchPhotos(accuracy=PhotoScan.HighAccuracy, preselection=PhotoScan.GenericPreselection, keypoint_limit=40000, tiepoint_limit=1000) # HighestAccuracy

chunk.alignCameras()

realign_list = list()
for camera in chunk.cameras:
    if not camera.transform:
        realign_list.append(camera)


chunk.alignCameras(cameras=realign_list)

# dense point cloud

chunk.buildDenseCloud(quality=PhotoScan.HighQuality,filter=PhotoScan.MildFiltering) # quality=PhotoScan.UltraQuality

# build mesh
chunk.buildModel(surface=PhotoScan.HeightField, interpolation=PhotoScan.EnabledInterpolation, face_count=PhotoScan.MediumFaceCount, source=PhotoScan.DenseCloudData)   # classes provide control on which kind of terrain we are mapping

# build UV
chunk.buildUV(mapping=PhotoScan.OrthophotoMapping)

# build texture
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=8192)

doc.save()

# build orthophoto
chunk.buildOrthomosaic()

# export orthophoto
orthophoto_path = "{0}/{1}_ortho.tif".format(project_files_path, project_name)
chunk.exportOrthomosaic(path=orthophoto_path, jpeg_quality=99)

# resize for easier viewing
# (note: re-saving with OpenCV strips the GeoTIFF georeferencing tags from the exported orthomosaic)
orthophoto = cv2.imread(orthophoto_path)
orthophoto_resized = cv2.resize(orthophoto, (0, 0), fx=0.5, fy=0.5)
cv2.imwrite(orthophoto_path, orthophoto_resized)

# report_path = "{0}/{1}_report.pdf".format(project_files_path, project_name)
# chunk.exportReport(path=report_path, title="API Processing Report")

t2 = time.time()  # t1 is assumed to be recorded via time.time() at the start of the full script

print("Time Taken : {} seconds".format(t2-t1))

19
Python and Java API / Re: Select all images for alignment using API
« on: March 02, 2017, 12:52:48 PM »
Hey Alexey. Thanks for the reply. Just to clarify: I add the code you've supplied after the first alignCameras call, right?

Also, if I want to re-align all cameras regardless, I just do -

Code:
chunk.alignCameras(chunk.cameras)


20
Python and Java API / Select all images for alignment using API
« on: March 02, 2017, 09:27:05 AM »
For the first step in my processing workflow, I'm using the following API calls -

Code:
doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(images)

# image matching and alignment
for frame in chunk.frames:
    frame.matchPhotos(accuracy=PhotoScan.HighAccuracy, preselection=PhotoScan.GenericPreselection, keypoint_limit=40000, tiepoint_limit=1000) # HighestAccuracy

chunk.alignCameras()

The same procedure in the GUI yields 3000 points as shown in the attached image (agisoft_1.png). As is evident, only the images with the green ticks are used in the alignment.

However, in the GUI, I can manually select all images (by pressing Ctrl + A) and then proceed with image alignment (agisoft_2.png). This yields 4000+ points for further processing. And the output is really good for the 2nd process, and very distorted for the 1st.

I wanted to know what change I can make in my API call to select all images and thereby get the same performance.
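(One hedged guess at a scripted equivalent of Ctrl+A: the PhotoScan Python API exposes, as far as I know, a `Camera.enabled` flag, and setting it on every camera before matching should ensure none are skipped. The helper below is duck-typed, so it assumes only that attribute; whether this exactly reproduces the GUI behaviour is an open question.)

```python
def enable_all(cameras):
    """Mark every camera as enabled so that none are excluded from
    matching/alignment (the scripted analogue of Ctrl+A in the GUI)."""
    for camera in cameras:
        camera.enabled = True
    return cameras
```

With a chunk in hand this would be called as `enable_all(chunk.cameras)` before the matchPhotos step.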

21
Okay. So I guess the only way to get stable alignment is using geotags then? Also, are there any changes / improvements that can be made to my script?

Finally, what about the option of forcing the use of all images rather than depending on the automatic selection for alignment? I have attached a screenshot of what I mean. As you can see, not all of the images have ticks on them. After I press Ctrl+A on all the photos in the window, I get a very different result.

22
Is it possible to schedule a call where I can show you the difficulty I am facing?

23
Hi Alexey. I appreciate that this isn't the best input and, yes, I am restricted to 2 MP due to legal regulations on data acquisition. This data is from a single flight line only and, yes, I am working on getting EXIF information and geotags.

However, this does not answer the question of why I get different output on the same dataset, using the same parameters in the API & the GUI. If the problem is with the data, why is the output of the GUI better than what I get with the API? I am happy with the output I get from the GUI and I'd like to be able to get the same from the API.

If that cannot be addressed, I would at least like to know the answer to the last question I posed in my mail. While processing through the GUI, if I do automatic image alignment, only 50 of the 180 images have green tick marks on them - I assume that means they are the ones used for further processing. However, I can manually use Ctrl+A to choose all the images, and then the output is much better and the number of points detected is larger. How can I ensure all images are used for processing via the API?

24
Hi Alexey. Any updates on this?

25
The exportReport command crashed. However, I have sent all the other details to support@agisoft.com

Please let me know your analysis as soon as possible.

26
And for the API?

27
Sure. Can you tell me how these reports are generated?

28
Hi Alexey. Yes, that is correct, I do not have accurate georeference for the images I am using. But I am using the same images in both GUI & API.

29
Python and Java API / Different output from API process and GUI workflow?
« on: February 22, 2017, 02:11:48 PM »
I am using the following script to automate my workflow of generating an orthomosaic for a set of 180 aerial photos -

Code:
doc = PhotoScan.app.document
doc.save("projects/frame_2.psx")

# add photos
chunk = doc.addChunk()
chunk.addPhotos(images)  # images is a list of image paths from a directory

# image matching and alignment
for frame in chunk.frames:
    frame.matchPhotos(accuracy=PhotoScan.HighAccuracy, preselection=PhotoScan.GenericPreselection, keypoint_limit=40000, tiepoint_limit=1000)
chunk.alignCameras()

# dense point cloud

chunk.buildDenseCloud(quality=PhotoScan.HighQuality,filter=PhotoScan.MildFiltering)

# build mesh
chunk.buildModel(surface=PhotoScan.HeightField, interpolation=PhotoScan.EnabledInterpolation, face_count=PhotoScan.HighFaceCount, source=PhotoScan.DenseCloudData, classes=[PhotoScan.Created])   # classes provide control on which kind of terrain we are mapping

# build UV
chunk.buildUV(mapping=PhotoScan.OrthophotoMapping)

# build texture
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=8192)

doc.save()

# build orthophoto
chunk.buildOrthomosaic()

# export orthophoto
chunk.exportOrthomosaic(path="projects/frame_2.tif", jpeg_quality=99)

However, at the end of processing, I noticed that the output obtained from the API on this dataset is drastically different from the output I obtain by following the orthophoto generation process (detailed here - http://www.agisoft.com/index.php?id=28) in the GUI.

I am running the Pro 30-day trial version on a 64-bit Linux system with 8GB RAM & a GeForce GTX 740 graphics card.

Any idea why this would be happening?

30
Python and Java API / API commands for corresponding GUI interactions?
« on: February 20, 2017, 03:34:50 PM »
Hello. I'm trying to evaluate PhotoScan Pro in headless mode on a server, so I'm trying to do all the steps that I normally do using the GUI. I had generated orthophotos in the GUI by following the tutorial provided here - http://www.agisoft.com/index.php?id=28

But I cannot figure out the corresponding API calls to make. For example, how do I set the parameters in the OpenCL preferences as shown in the tutorial? What is the corresponding Python API command? Similarly, what is the API call to add multiple images to a project and align them?
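(On the OpenCL preference specifically, my understanding - which should be checked against the API reference - is that `PhotoScan.app.gpu_mask` takes a bitmask of zero-based GPU device indices. A small hypothetical helper for building such a mask:)

```python
def gpu_mask_for(device_indices):
    """Build a bitmask enabling the given zero-based GPU device
    indices, e.g. [0, 1] -> 0b11 == 3. Hypothetical helper; the
    result would then be assigned to PhotoScan.app.gpu_mask."""
    mask = 0
    for index in device_indices:
        mask |= 1 << index
    return mask
```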

Please help.
