

Messages - an198317

16
Alexey,

I was trying to put all of those parameters directly in the code, but I think loading a pre-saved XML calibration file should work just fine.

So this part should be before the photo alignment, right?

I have code like:
new_chunk.cameras.add(path)
new_chunk.enabled = True
doc.chunks.add(new_chunk)

And after those three lines, the camera calibration should be inserted, and then the code will match the photos, like:

doc.activeChunk.matchPhotos(accuracy="high", preselection="generic")
doc.activeChunk.alignPhotos()

Is that the right place to put the camera calibration parameters?
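
Here is roughly what I have in mind between adding the photos and matching them. I am guessing the Calibration and sensor attribute names (load, user_calib, fixed) from newer examples I found, so they may not match the version I am running:

# Guessed calibration step (attribute names taken from newer examples,
# not verified for my version): load the pre-saved XML calibration and fix it
# so that alignment keeps it unchanged ("Precalibrated" + "Fix calibration").
calib = PhotoScan.Calibration()
calib.load("my_calibration.xml")   # hypothetical path to my saved XML file
for sensor in new_chunk.sensors:   # assuming the chunk exposes its sensors
    sensor.user_calib = calib      # use the precalibrated parameters
    sensor.fixed = True            # my guess at "Fix calibration"
# ...then matchPhotos() and alignPhotos() as above.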

Thanks a lot!
Nan

17
Hi Alexey,

Thank you so much for your super fast reply. I had some code written down, but I don't think it's right. Could you give me some example code for inputting fx, fy, cx, cy, k1, k2, and k3, and also, as you mentioned, for setting "Fix calibration"?
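
For context, here is the kind of thing I was sketching. The attribute names (fx, fy, cx, cy, k1..k3 on a Calibration object, and the fixed flag on the sensor) are guesses on my part, not taken from the manual, and the numbers are just placeholders:

# Sketch only: attribute names and the fixed flag are my guesses, and the
# numeric values below are placeholders for our real calibration parameters.
calib = PhotoScan.Calibration()
calib.fx = 3900.0
calib.fy = 3900.0
calib.cx = 2048.0
calib.cy = 1536.0
calib.k1 = -0.05
calib.k2 = 0.01
calib.k3 = 0.0

for sensor in doc.activeChunk.sensors:
    sensor.user_calib = calib   # use the precalibrated parameters
    sensor.fixed = True         # my guess at the "Fix calibration" checkbox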

Thank you so much for your time!!

Nan

18
Hi all,

We finally found a good set of camera calibration parameters for all of our cameras. Now I want to implement this process in my Python pipeline so we can make 3D models completely automatically.

Searching around, I didn't find a good example of how to write a script so that the parameters can be set before aligning the images.

Relating the Python script to the PhotoScan Pro interface, the "Type" under the "Initial" tab of the Camera Calibration dialog should be "Precalibrated", and "Fix Calibration" needs to be selected as well.

Thank you all very much for any of your input.
Nan

19
General / High-end workstation or server for Agisoft PhotoScan Pro?
« on: November 14, 2013, 07:50:03 PM »
Hi guys,

We have the budget to get a high-end workstation or server for processing our 3D models. We are looking at high-end Dell servers that can hold multiple processors (CPUs), each with 8+ cores. The OS will be Windows Server 2008 or 2012.

So I'm wondering: can PhotoScan Pro really utilize that computing power, and is it compatible with the Windows Server OS?

If not, what is the best configuration for a high-end workstation or server on which Agisoft PhotoScan can run fast?

Nan

20
General / Questions about Camera XYZ coordinates and world coordinates
« on: October 18, 2013, 06:47:04 PM »
Hi all,

The more I use Agisoft PhotoScan Pro, the less I feel like I know. My question is: what is the unit of the world coordinate system of a 3D model if I don't have GPS/georeference information? Is the world coordinate system related to image pixels? If so, what is the conversion factor between the two?

Also, when I export the camera information, are the XYZ values the principal points or the focal points?

Thanks in advance,
Nan

21
Python and Java API / Re: Python script for exporting model??
« on: September 04, 2013, 12:05:01 AM »
Thanks, Alexey, for your great information. I really hope those options can be implemented in the next updates; I think it would eventually benefit a lot of users. I am looking forward to seeing the next update and the final 1.0.0 version!

Nan

22
Python and Java API / Re: Python script for exporting model??
« on: September 03, 2013, 08:34:51 PM »
Hi Alexey,

That's quite unfortunate for us. We just found out we can make much better use of the PLY model than the point cloud. I am wondering when the next update will be released? As you know, my project relies heavily on PhotoScan Pro.

Thanks a lot!
Nan

23
Python and Java API / Python script for exporting model??
« on: September 03, 2013, 05:35:41 PM »
Hi all,

I want to add an export-model routine to my workflow to export the model as a PLY file with normals and colors. I didn't find such a routine in the Python API; I only found "save". There is an export_normals option, but there is no option to export vertex colors for a PLY file.

When the PLY file is exported, I don't want to use the "Binary Encoding" option.

How can I do this in PhotoScan Pro?
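
To be concrete, what I am hoping for is a call roughly like this (hypothetical; as far as I can tell, nothing like it exists in the current Python API):

# Hypothetical call, not in the current API as far as I can tell: export the
# mesh as an ASCII PLY (no "Binary Encoding") with per-vertex normals and colors.
doc.activeChunk.exportModel(Path + "model.ply", format="ply", binary=False, export_normals=True, export_colors=True)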

Thanks in advance!
Nan

24
Hi,

We have many stationary cameras shooting plants over time. Neither the cameras nor the plants were moved during this period. We then built a series of time-series plant models to detect plant leaf movement.

So is there any way I can specify the viewing angle and zoom level, so that every time I open a model I can be sure I am looking at the plants from the same angle and at the same zoom level?

Thanks,
Nan

25
Hi Alexey,

Thank you so much for your fast response! Your code is mostly working for me, but there are some small problems I still need to fix.

Question 1: I have noticed this since the day I started using PhotoScan Pro. When I export an orthophoto manually, if I choose "Top XY" for the Projection Plane, the orthophoto is not the view of a camera above the ground shooting down; it looks as if I am viewing the scene from beneath the ground surface. When I choose "Bottom XY", the orthophoto is the view shooting down from the air.

Do I misunderstand something here? What's the transformation matrix for "bottom XY"?

Question 2: I was using 4096 as the buildTexture height and width, and I also used 4096 for the exportOrthophoto block size; the saved orthophoto came out in 8 pieces. When I changed the exportOrthophoto blockw and blockh from 4096 to 16384, the saved orthophoto was one piece with higher resolution.

So I was wondering: is there no relationship between the texture dimensions and the orthophoto dimensions? And if I use 16384 as my exportOrthophoto blockw and blockh, will that give me the highest resolution?

I am also not sure how to use the dx and dy settings here. What is 0.01 for?

Question 3: Also, when I manually export an orthophoto and use 16384 as the "Max Dimension", the saved orthophoto is cropped. How can I do the same in my code so that the blank area is cropped?

Here is my code for buildTexture and exportOrthophoto:

# Build the texture of the model.
doc.activeChunk.buildTexture(mapping="adaptive", blending="average", width=4096, height=4096)

# Export the orthophoto.
if not doc.activeChunk.transform:
    doc.activeChunk.transform = PhotoScan.Matrix([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]])

proj = PhotoScan.Matrix([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]])
doc.activeChunk.exportOrthophoto(Path+"ortho"+".tif", format="tif", blending="average", projection=proj, blockw=16384, blockh=16384, write_kml=False, write_world=False)
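
For Question 1, my current (completely unverified) guess is that "Bottom XY" just means the same projection plane viewed from the other side, i.e. the identity matrix rotated 180 degrees about the X axis, something like:

# Unverified guess: 180-degree rotation about the X axis so the projection
# plane is viewed from the opposite side (my guess at "Bottom XY").
proj_bottom = PhotoScan.Matrix([[1, 0, 0, 0],
                                [0, -1, 0, 0],
                                [0, 0, -1, 0],
                                [0, 0, 0, 1]])
doc.activeChunk.exportOrthophoto(Path+"ortho_bottom"+".tif", format="tif", blending="average", projection=proj_bottom, blockw=16384, blockh=16384, write_kml=False, write_world=False)

If that is wrong, I would really appreciate seeing the correct matrix.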


Thank you so much for your big help, Alexey!!
Nan


26
Python and Java API / Issue with Python script for exporting orthophoto
« on: June 11, 2013, 07:20:59 PM »
Hi,

I am building a Python workflow for making models automatically. When I tried to add exportOrthophoto to my code, I ran into a problem. It seems PhotoScan Pro was trying to build an orthophoto, but the PhotoScan Pro console showed "Mosaic 0x0", and the orthophoto was never saved.

My code is:
doc.activeChunk.exportOrthophoto(Path+"ortho.tif", format="tif", blending="average", blockw=4096, blockh=4096, write_kml=False, write_world=False)

Does anyone know what's wrong with my code?
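
One guess on my side (not verified): maybe the chunk has no transform set yet, and that is why the mosaic comes out as 0x0. If so, something like the following might be needed before the export:

# Unverified guess: give the chunk an identity transform before exporting,
# in case a missing transform is what produces the "Mosaic 0x0" output.
if not doc.activeChunk.transform:
    doc.activeChunk.transform = PhotoScan.Matrix([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]])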

And I also want to export using "Bottom XY" as the Projection Plane; how can I set this up in my code?

Thanks a lot!
Nan

27
General / Re: Tetracam multispectral imagery processing
« on: June 05, 2013, 08:00:14 PM »
Hi ricardo9856,

I am glad to learn you are giving up on Tetracam; I gave up on it a long time ago. My lab works very closely with MaxMax, the company I mentioned in my previous response. We had them convert four Canon S95 and S100 cameras to NIR-G-B and two Canon XSi cameras to NIR-only cameras.

My lab and MaxMax just came up with an idea to convert a Canon 6D full-frame camera to an NIR-R-G camera. We are still testing it, and I will post more once we have the test results.

Nan

28
Python and Java API / Re: Question on setting user calibration
« on: June 04, 2013, 08:16:52 PM »
Hi sarko,

I am interested in this user_calib argument as well. I noticed you only used cenX, cenY, and the focal length. What other information can we put into user_calib? Can we put the camera position XYZ and orientation angles in user_calib as well?

Thanks,
Nan

29
General / Re: Tetracam multispectral imagery processing
« on: May 30, 2013, 02:25:25 AM »
We had a company convert a Canon DSLR to an NIR-R-G camera for us. The idea is to open the blue channel to NIR and keep R and G, but the problem is that the R and G channels are also affected. We had the same company convert a false-NDVI camera as well: NIR-G-B bands.

I studied the Tetracam cameras; based on our findings, the resolution is too low and some of the camera technology is dated. I would be glad to learn if I am wrong about this camera.

Nan

30
General / Re: No depth maps for some camera positions
« on: May 30, 2013, 02:18:02 AM »
This is a REALLY good post, RalfH!! I am having a similar issue with PhotoScan Pro. If you remember my earlier posts, we have a camera setup for taking images of small plants, and the overlap between images is more than 50%.

But the "Align camera" seems to be a little strange to me, that is some time Photoscan can't align images together (the positions are completed wrong), but with the same set of images, Photoscan can align them perfect 2nd time, but it might not be good for 3rd time....

This issue has happened to me a lot and seems to be "random", so I really want to know what is causing it and how to fix it.

Nan
