Agisoft Metashape
Agisoft Metashape => General => Topic started by: Dmitry Semyonov on September 19, 2017, 07:08:03 PM
-
Pre-release version of Agisoft PhotoScan 1.4.0 is available for download.
This pre-release is considered unstable and is not recommended for production use.
Please use the links below for download:
Agisoft PhotoScan Standard edition
Windows 64 bit (http://download.agisoft.com/photoscan_1_4_0_x64.msi)
Windows 32 bit (http://download.agisoft.com/photoscan_1_4_0_x86.msi)
Mac OS X (http://download.agisoft.com/photoscan_1_4_0.dmg)
Linux 64 bit (http://download.agisoft.com/photoscan_1_4_0_amd64.tar.gz)
Agisoft PhotoScan Professional edition
Windows 64 bit (http://download.agisoft.com/photoscan-pro_1_4_0_x64.msi)
Windows 32 bit (http://download.agisoft.com/photoscan-pro_1_4_0_x86.msi)
Mac OS X (http://download.agisoft.com/photoscan-pro_1_4_0.dmg)
Linux 64 bit (http://download.agisoft.com/photoscan-pro_1_4_0_amd64.tar.gz)
PhotoScan 1.4.0 Change Log (http://www.agisoft.com/pdf/photoscan_changelog.pdf)
PhotoScan 1.4.0 Python API (http://www.agisoft.com/pdf/photoscan_python_api_1_4_0.pdf)
Please let us know if you have any problems using this version.
Please post problems and/or questions concerning other PhotoScan versions in separate topics.
This topic will be removed when the final 1.4.0 version is released.
Note
Project files created with version 1.4.0 can't be opened in earlier versions.
EDIT:
Updated to version 1.4.0 build 5585.
-
Congratulations, Agisoft team! This seems a formidable step ahead! Thanks for listening!
Best regards,
GEOBIT
-
Congrats, Agisoft team. Keep going, you are the best. I hope the next release brings an improvement in processing speed.
Mohamed
-
In the Ortho menu, what is the use of the Rectangle, Circle, and Free-form Selection tools? When I open Ortho mode, they don't select anything.
Mohamed
-
Hello Mohamed,
Currently the selection tools in the Ortho view mode allow you to select markers and shapes (actually, they worked the same way in 1.3).
-
OK, thanks.
Also, the new delighting tool looks very promising and the results are quite good, though it still needs some improvement with hard shadows, I guess.
Best.
Mohamed
-
Great work... very good new features...
However, in the Survey Statistics window the size of the error ellipses for points or photos is not correct. In the attached image they are shown at 1:1 scale, and compared to the distance scale this would mean point or camera center errors of many meters, while they are actually a few centimeters! And if you change the sliding scale, the size of the reference ellipse does not change.
-
Why the **ck did the 1.4 pre-release remove the 1.3 version, even though I specified a different installation folder? :(
But mesh refinement is amazing. You finally made this! But because you took one well-known paper on this method, most users will not see any benefit compared to the non-refined workflow ;)
Mesh refinement requires some changes, and can be faster than usual.
-
The biggest problem is that mesh refinement, as usual for Agisoft, is not out-of-core, and 2 GB of VRAM is not enough for an average 2017 scan.
This is bad, because your users are not rich studios (those moved from PhotoScan to more modern tools years ago) but hobbyists who may still have GPUs with only 1 or 2 GB of VRAM.
A mesh of more than 4 million polygons requires about 2.3 GB of memory and does not allow refinement.
-
Great job, Agisoft team.
I haven't tested version 1.4.0 yet, but the change log looks quite impressive, especially:
- Animation pane for fly through videos #bigthumbsup
- 3Dconnexion Space Mouse support #bigthumbsup
- Radiometric normalisation based on camera exposure parameters and irradiance sensor #bigthumbsup
- Cesium 3D tiles format support for tiled model export #bigthumbsup
- Survey statistics
- Offset estimation for fixed camera rigs
... to just name a few.
Impressive work.
Regards,
SAV
-
However, in the Survey Statistics window the size of the error ellipses for points or photos is not correct. In the attached image they are shown at 1:1 scale, and compared to the distance scale this would mean point or camera center errors of many meters, while they are actually a few centimeters! And if you change the sliding scale, the size of the reference ellipse does not change.
Hello Paul,
The scale multiplier x1 means that the ellipse for each camera is displayed at 1:1 relative to the scale bar at the bottom of the Survey Statistics dialog.
There is, however, a minimal radius for the camera location marker, which is required to show the elevation error (by color).
-
That's funny. Did you just copy and paste code from the OpenMVS RefineMesh module for your mesh refinement?
Even if the user decimates the mesh heavily and works with only 50 images, such a head-on solution will always run out of memory. At least OpenMVS can use the CPU and system RAM, so 300+ images are still possible to process with 64 GB of RAM.
2017-09-20 16:28:32 RefineMesh: quality = Ultra high, iterations = 20, smoothness = 1
2017-09-20 16:28:32 Using device: GeForce GTX 960, 8 compute units, 2048 MB global memory, compute capability 5.2
2017-09-20 16:28:32 max work group size 1024
2017-09-20 16:28:32 max work item sizes [1024, 1024, 64]
2017-09-20 16:28:32 Analyzing mesh detalization...
2017-09-20 16:29:05 Memory required: 1743 Mb + 6 Mb = 1749 Mb
2017-09-20 16:29:05 Stage #1 out of 4
2017-09-20 16:29:05 Subdividing mesh...
2017-09-20 16:29:07 Memory required: 27 Mb + 6 Mb = 33 Mb
2017-09-20 16:29:07 Loading photos...
2017-09-20 16:29:09 loaded photos in 2.642 seconds
2017-09-20 16:29:09 Refining model...
2017-09-20 16:29:12 Iteration #1 out of 20
2017-09-20 16:29:40 Iteration #2 out of 20
2017-09-20 16:29:59 Iteration #3 out of 20
2017-09-20 16:30:07 Iteration #4 out of 20
2017-09-20 16:30:16 Iteration #5 out of 20
2017-09-20 16:30:28 Iteration #6 out of 20
2017-09-20 16:30:49 Iteration #7 out of 20
2017-09-20 16:31:12 Iteration #8 out of 20
2017-09-20 16:31:34 Iteration #9 out of 20
2017-09-20 16:31:54 Iteration #10 out of 20
2017-09-20 16:32:16 Iteration #11 out of 20
2017-09-20 16:32:38 Iteration #12 out of 20
2017-09-20 16:32:56 Iteration #13 out of 20
2017-09-20 16:33:14 Iteration #14 out of 20
2017-09-20 16:33:38 Iteration #15 out of 20
2017-09-20 16:33:57 Iteration #16 out of 20
2017-09-20 16:34:20 Iteration #17 out of 20
2017-09-20 16:34:39 Iteration #18 out of 20
2017-09-20 16:35:00 Iteration #19 out of 20
2017-09-20 16:35:09 Iteration #20 out of 20
2017-09-20 16:35:17 Stage #2 out of 4
2017-09-20 16:35:17 Subdividing mesh...
2017-09-20 16:35:20 Memory required: 108 Mb + 19 Mb = 128 Mb
2017-09-20 16:35:20 Loading photos...
2017-09-20 16:35:23 loaded photos in 2.516 seconds
2017-09-20 16:35:23 Refining model...
2017-09-20 16:35:31 Iteration #1 out of 20
2017-09-20 16:35:58 Iteration #2 out of 20
2017-09-20 16:36:25 Iteration #3 out of 20
2017-09-20 16:37:01 Iteration #4 out of 20
2017-09-20 16:37:42 Iteration #5 out of 20
2017-09-20 16:38:09 Iteration #6 out of 20
2017-09-20 16:38:37 Iteration #7 out of 20
2017-09-20 16:39:05 Iteration #8 out of 20
2017-09-20 16:39:32 Iteration #9 out of 20
2017-09-20 16:39:59 Iteration #10 out of 20
2017-09-20 16:40:26 Iteration #11 out of 20
2017-09-20 16:41:02 Iteration #12 out of 20
2017-09-20 16:41:34 Iteration #13 out of 20
2017-09-20 16:42:08 Iteration #14 out of 20
2017-09-20 16:42:42 Iteration #15 out of 20
2017-09-20 16:43:15 Iteration #16 out of 20
2017-09-20 16:44:33 Iteration #17 out of 20
2017-09-20 16:46:20 Iteration #18 out of 20
2017-09-20 16:46:52 Iteration #19 out of 20
2017-09-20 16:47:25 Iteration #20 out of 20
2017-09-20 16:47:56 Stage #3 out of 4
2017-09-20 16:47:56 Subdividing mesh...
2017-09-20 16:48:07 Memory required: 435 Mb + 72 Mb = 508 Mb
2017-09-20 16:48:07 Loading photos...
2017-09-20 16:48:10 loaded photos in 2.516 seconds
2017-09-20 16:48:10 Refining model...
2017-09-20 16:48:42 Iteration #1 out of 20
2017-09-20 16:50:57 Iteration #2 out of 20
2017-09-20 16:52:44 Iteration #3 out of 20
2017-09-20 17:05:25 Iteration #4 out of 20
2017-09-20 17:07:18 Iteration #5 out of 20
2017-09-20 17:08:58 Iteration #6 out of 20
2017-09-20 17:10:39 Iteration #7 out of 20
2017-09-20 17:12:18 Iteration #8 out of 20
2017-09-20 17:13:58 Iteration #9 out of 20
2017-09-20 17:15:39 Iteration #10 out of 20
2017-09-20 17:17:19 Iteration #11 out of 20
2017-09-20 17:18:59 Iteration #12 out of 20
2017-09-20 17:20:38 Iteration #13 out of 20
2017-09-20 17:22:18 Iteration #14 out of 20
2017-09-20 17:23:57 Iteration #15 out of 20
2017-09-20 17:25:38 Iteration #16 out of 20
2017-09-20 17:27:17 Iteration #17 out of 20
2017-09-20 17:28:57 Iteration #18 out of 20
2017-09-20 17:30:38 Iteration #19 out of 20
2017-09-20 17:32:17 Iteration #20 out of 20
2017-09-20 17:33:57 Stage #4 out of 4
2017-09-20 17:33:57 Subdividing mesh...
2017-09-20 17:34:42 Memory required: 1743 Mb + 287 Mb = 2030 Mb
2017-09-20 17:34:42 Loading photos...
2017-09-20 17:34:45 loaded photos in 3.218 seconds
2017-09-20 17:34:45 Refining model...
2017-09-20 17:36:57 Iteration #1 out of 20
2017-09-20 17:36:58 Finished processing in 4106.3 sec (exit code 0)
2017-09-20 17:36:58 Error: out of memory (2) at line 179
-
Great stuff, a lot to look into!
I did a quick test with video import but could not get it to work. I am trying to import an MP4 file from a GoPro Hero 4 Black, but I am getting an error message that reads "A media resource couldn't be resolved". I am unable to find any help on this issue in the help file.
Any suggestions?
-
Hello Kjellis85,
Which OS are you using, and which codec is used by the video?
-
Even if the user decimates the mesh heavily and works with only 50 images, such a head-on solution will always run out of memory. At least OpenMVS can use the CPU and system RAM, so 300+ images are still possible to process with 64 GB of RAM.
Currently the refine mesh functionality is an experimental feature in PhotoScan and is available on GPU only (CUDA/OpenCL), but we are planning to implement CPU support for this feature in upcoming updates.
As for video memory consumption, it depends not on the number of images but on the image resolution and the quality option selected in the Refine Mesh dialog. The mesh size also matters, but its impact is considerably lower than that of the images.
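To make this concrete: since the VRAM cost is driven by image resolution and the quality setting rather than image count, a rough back-of-envelope sketch shows how fast the per-image buffer shrinks as the quality level drops. The per-side downscale factors below (Ultra = full size, High = 1/2, Medium = 1/4) follow PhotoScan's usual quality convention, and the 4 bytes per pixel is purely an illustrative assumption, not a measured figure.

```python
# Rough, illustrative estimate of the per-image buffer size a
# photo-consistency refinement pass might keep in VRAM. The downscale
# factors (Ultra = full resolution, High = 1/2 per side, Medium = 1/4
# per side) follow PhotoScan's usual quality convention; 4 bytes/pixel
# is an assumption for illustration only.

def image_buffer_mb(megapixels, downscale, bytes_per_pixel=4):
    """Approximate VRAM per image in MB at a given per-side downscale."""
    pixels = megapixels * 1_000_000 / (downscale ** 2)
    return pixels * bytes_per_pixel / (1024 ** 2)

for quality, downscale in [("Ultra", 1), ("High", 2), ("Medium", 4)]:
    mb = image_buffer_mb(16, downscale)  # a 16 Mpx image, as in the log above
    print(f"{quality:6s}: ~{mb:.0f} MB per image")
```

Even under these toy assumptions, dropping one quality level cuts the per-image footprint by a factor of four, which matches the observation that High succeeds where Ultra runs out of memory.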
-
Congrats on the new release.
Can you guys provide a basic rundown of the new features? For instance, is the Refine Mesh command doing procedural detail extraction from the source images (like color-to-normal software), or is there some logic involved that actually improves upon the original result?
-
Cool! 8) Impressive list of changes. Look forward to playing with it.
-
Hello Kjellis85,
Which OS are you using, and which codec is used by the video?
I am running Windows 10 and the video uses the H.264 codec.
-
Even if the user decimates the mesh heavily and works with only 50 images, such a head-on solution will always run out of memory. At least OpenMVS can use the CPU and system RAM, so 300+ images are still possible to process with 64 GB of RAM.
Currently the refine mesh functionality is an experimental feature in PhotoScan and is available on GPU only (CUDA/OpenCL), but we are planning to implement CPU support for this feature in upcoming updates.
As for video memory consumption, it depends not on the number of images but on the image resolution and the quality option selected in the Refine Mesh dialog. The mesh size also matters, but its impact is considerably lower than that of the images.
Yes, I know; I played with this method in OpenMVS, with exactly the same limitations, so the CPU version did not help much either. 64 GB of RAM was not enough for 300+ 16 Mpx images because all processing happens in core.
Do you have any information regarding the memory requirements of your implementation?
53 x 16 Mpx images can be processed at High settings, but not Ultra.
But what VRAM would be needed, for example, for 500 x 24 Mpx images on High, or lower?
-
Dear Alexey,
Is there any news about improving processing speed? Many people are looking for that.
Mohamed
-
Can the PhotoScan 1.4.0 pre-release be installed alongside PhotoScan 1.3.4?
I think in previous pre-releases this was possible by moving the stable release's install folder to a different location.
-
First... Awesome features in the 1.4 beta release! Embedding image thumbnails into the camera boxes in Model view really helps visualize which cameras are misaligned!
I think I may have found a couple of bugs:
1.) I tried to import a manually created markers file exported from PhotoScan Pro 1.3.x into PhotoScan Pro 1.4 Beta. I got a pop-up error dialog saying "Missing Sensor ID".
2.) I tried to import an MP4 video file and got a pop-up dialog saying "A media resource couldn't be resolved".
-
Can the PhotoScan 1.4.0 pre-release be installed alongside PhotoScan 1.3.4?
I think in previous pre-releases this was possible by moving the stable release's install folder to a different location.
Hello Bill,
If you are using Windows, then you need to copy the /PhotoScan Pro/ folder to another location before installing 1.4, since the installer will remove the previous installation (as Vlad mentioned on the first page).
-
I think I may have found a couple of bugs:
1.) I tried to import a manually created markers file exported from PhotoScan Pro 1.3.x into PhotoScan Pro 1.4 Beta. I got a pop-up error dialog saying "Missing Sensor ID".
2.) I tried to import an MP4 video file and got a pop-up dialog saying "A media resource couldn't be resolved".
Thank you for reporting.
We'll fix the XML compatibility for markers in the next update. As for the video import, please specify the OS you are using and the codec used in the video.
-
Currently the refine mesh functionality is an experimental feature in PhotoScan and is available on GPU only (CUDA/OpenCL), but we are planning to implement CPU support for this feature in upcoming updates.
As for video memory consumption, it depends not on the number of images but on the image resolution and the quality option selected in the Refine Mesh dialog. The mesh size also matters, but its impact is considerably lower than that of the images.
Yes, I know; I played with this method in OpenMVS, with exactly the same limitations, so the CPU version did not help much either. 64 GB of RAM was not enough for 300+ 16 Mpx images because all processing happens in core.
I am pretty sure that OpenMVS loads all images to VRAM before starting processing, so its video memory consumption depends on the number of images.
-
Hi Alexey,
Is it possible to generate a custom PATH/TRACK file within PhotoScan (not just horizontal or vertical as listed in the presets) that can be used for animations/fly-through videos?
Would be nice if I could save a number of 'views/viewpoints' and then generate a PATH/TRACK file from it.
Additionally, my 3Dconnexion mouse only allows me to PAN/MOVE the model/point cloud, but not ROTATE or ZOOM. In the photo view it does not do anything (it would be nice if ZOOM & PAN worked there).
Last but not least, what happened to ENABLE COLOR CORRECTION when building a texture or orthomosaic? I couldn't find it.
Regards,
SAV
-
Hello SAV,
Currently camera tracks can be imported from a .path file (which can be created in Blender) or from an .fbx file. PhotoScan itself doesn't currently have tools for manual track creation.
Color correction has been moved to a separate procedure: Tools Menu -> Calibrate Colors. The results of the color calibration procedure will be applied at the texture/orthomosaic generation stages.
As for the 3Dconnexion mouse, we'll check it shortly.
-
Hi Alexey,
Thanks for clarifying.
Is there a way to generate a *.path file based on viewpoints using PYTHON scripting? ;-)
It would be great to be able to generate custom fly-through animations without having to use third-party software (such as Blender) midway to generate the path/track file.
Regarding color correction: what does CALIBRATE WHITE BALANCE do? Shouldn't there be a 'dropper' tool, like in Photoshop, that would allow you to pick the proper colour for calibration?
One more thing: is there a way to select parts of the dense point cloud based on image count? This is already possible for the sparse point cloud (gradual selection tool).
Regards,
SAV
-
Hello SAV,
There's some limited access to the camera track via:
chunk.animation.track
I've checked the 3Dconnexion mouse and haven't had any issues with rotation and zooming (the latter is performed by "flying" higher or lower).
For color correction the "white balance" option means that each band will be calibrated individually.
-
Can the PhotoScan 1.4.0 pre-release be installed alongside PhotoScan 1.3.4?
I think in previous pre-releases this was possible by moving the stable release's install folder to a different location.
Hello Bill,
If you are using Windows, then you need to copy the /PhotoScan Pro/ folder to another location before installing 1.4, since the installer will remove the previous installation (as Vlad mentioned on the first page).
Oops! I missed that.
Thanks Alexey
-
Interesting how far PhotoScan users are from caring about mesh quality :)
They discuss anything but the most impressive feature of PhotoScan 1.4.
Here is an example of how photo-consistency refinement works in PhotoScan.
But this result is still far from what the method can do.
https://i.imgur.com/p9PDoHx.png
And this is Medium quality ;) which required 2 GB of VRAM for just 53 x 16 Mpx images (as I understand it, images downsampled 3 or 4 times are used).
-
This is great, and thanks to the Agisoft team for continued improvement.
However, there appears to still be no way to import a dense cloud, and still no way to edit a dense cloud properly (e.g. no ability to view only a small section of the cloud, so as to prevent selecting parts you don't want to delete).
In the case of import, this seems an easy thing to do, because you already allow the mesh to be exported and imported.
Currently this makes refining the end result quite difficult. If you don't want to expend effort on dense cloud editing tools, then please, please, please add the import ability. The cloud can then be edited in an external tool and imported back in.
-
Hello cbnewham,
still no way to edit a dense cloud properly (e.g. no ability to view only a small section of the cloud, so as to prevent selecting parts you don't want to delete).
If you are using PhotoScan Pro, please try the following:
- select a part of the dense cloud,
- choose the "Filter by Selection" option in the drop-down menu of the Dense Cloud Classes button on the toolbar.
It should hide all points except the selection. When you have finished editing the smaller volume, you can reset the filter to display the hidden points.
-
Hi, when I start video importing, the application screen becomes like the attached image.
And the import doesn't stop at 100%; it keeps re-importing the last frame until I manually terminate the application.
macOS Sierra
on a MacBook Pro
with integrated Iris graphics
-
Hello Matt,
We are planning to publish an update of the pre-release with some fixes based on the feedback received; hopefully it will also fix the video import issue on Mac OS that is affecting the GUI.
-
Thanks Alexey, but I only have the standard version, so that's not a solution for me.
:(
-
OK, understood. We'll see if it's possible to add similar functionality to the Standard edition.
-
Thanks Alexey. That would be really great.
For photogrammetry of sculpture, more capability in dealing with the dense cloud is really the only thing I miss in what is otherwise a fantastic product.
-
Hi, when I start video importing, the application screen becomes like the attached image.
And the import doesn't stop at 100%; it keeps re-importing the last frame until I manually terminate the application.
Hello Matt,
Can you please check build 5097? Links are updated in the first post.
-
When running "Refine Mesh" I keep getting this error
"Kernel failed: unspecified launch failure (4) at line 197"
If I try running it again without restarting PhotoScan, I get this error:
"CUDA_ERROR_LAUNCH_FAILED (700) at line 115"
Any idea why I'm getting these errors?
-
Hello all,
I'm a Professional edition user and I'm experiencing serious "out of memory" problems in mesh creation, even with models of fewer than 100 photos. I'm working on a 64-bit Windows 10 system with an Intel i7-7700, 32 GB of RAM, and an AMD Radeon R9 280 with 4 GB of VRAM.
-
apintuc: as far as I'm aware and experience dictates, mesh creation is mostly dependent on the number of dense cloud points. I have 32 GB of RAM on Windows 10, and my limit is 50 million points, regardless of how many photos are involved (my latest project has 800, other projects as few as 80, but the limit is always 50 million points).
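Taking the 32 GB / 50-million-point limit above as a single empirical data point (not an official Agisoft figure), a rough rule-of-thumb calculator might look like this; the linear scaling is an assumption for illustration only.

```python
# A crude rule of thumb derived solely from cbnewham's report above:
# a 32 GB machine tops out around 50 million dense cloud points, i.e.
# roughly 0.64 GB of RAM per million points for mesh generation.
# One empirical data point, not an official Agisoft figure.

GB_PER_MILLION_POINTS = 32 / 50  # ~0.64 GB, from the report above

def estimated_mesh_ram_gb(dense_points_millions):
    """Guess the peak RAM (GB) needed to mesh a dense cloud of this size."""
    return dense_points_millions * GB_PER_MILLION_POINTS

def max_points_millions(ram_gb):
    """Guess the largest dense cloud (millions of points) a machine can mesh."""
    return ram_gb / GB_PER_MILLION_POINTS

print(estimated_mesh_ram_gb(10))  # a ~10M-point cloud, as apintuc describes
print(max_points_millions(64))    # headroom on a 64 GB machine
```

By this crude rule, 5-10 million-point clouds should fit comfortably in 32 GB, which suggests apintuc's out-of-memory errors have some other cause.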
-
I think I may have found a couple of bugs:
1.) I tried to import a manually created markers file exported from PhotoScan Pro 1.3.x into PhotoScan Pro 1.4 Beta. I got a pop-up error dialog saying "Missing Sensor ID".
2.) I tried to import an MP4 video file and got a pop-up dialog saying "A media resource couldn't be resolved".
Thank you for reporting.
We'll fix the XML compatibility for markers in the next update. As for the video import, please specify the OS you are using and the codec used in the video.
I also tried to import an MP4 video into PhotoScan and got the same message, "A media resource couldn't be resolved", on the Windows platform...
Please see the attached screenshot.
-
I'm using 1.4 build 5097 and trying to import video (both MP4 and MOV), but I keep getting the error "A media resource couldn't be resolved".
-
apintuc: as far as I'm aware and experience dictates, mesh creation is mostly dependent on the number of dense cloud points. I have 32 GB of RAM on Windows 10, and my limit is 50 million points, regardless of how many photos are involved (my latest project has 800, other projects as few as 80, but the limit is always 50 million points).
Dear cbnewham, in general the number of photos determines the number of points... Anyway, I'm talking about point clouds of at most 10 million points (or even 5 million) that cannot be processed by the PC...
-
Hi,
I typed in the coordinates of the fiducial marks and the focal length, but there is no prompt to measure the fiducial marks on the photos...
-
Very excited to see support for 3D mouse. However when we use it, all the axes seem backwards compared to when we use it with other software. Is there some way to set this?
-
Hi, I've had processing speed problems with PhotoScan versions above 1.3.0 (about double or more the processing time for the same projects), so I tested 1.4.0, with disappointing results... about 35 hours to produce a 4-million-point cloud on an i7-6700K with 32 GB of RAM and a Radeon R7. I'm attaching the log (https://drive.google.com/file/d/0BzGCwRIMzzYsM0ZXTTBfUHdZcTg/view?usp=sharing) in case something comes of it.
-
Hi !
Just came back from my final orals. I would like to know if it's possible to install the 1.4 pre-release without removing 1.3.4 from our computer?
Regards
EDIT: you already answered, sorry
Hello Bill,
If you are using Windows, then you need to copy the /PhotoScan Pro/ folder to another location before installing 1.4, since the installer will remove the previous installation (as Vlad mentioned on the first page).
-
Copy your existing installation folder to a safe place, install the new version, and then you can swap the folders back to return to your original version.
Hi !
Just came back from my final orals. I would like to know if it's possible to install the 1.4 pre-release without removing 1.3.4 from our computer?
Regards
-
Can you guys add a keyboard shortcut for Gradual Selection? Now that you've moved it to a new tab, it's even more frustrating to access it multiple times (I keep going to the Edit tab instead) :)
-
Like others, I am still not able to import video:
"A media resource couldn't be resolved"
Type of imported file: Full HD MP4 video.
Regards
-
Very excited to see support for 3D mouse. However when we use it, all the axes seem backwards compared to when we use it with other software. Is there some way to set this?
+1
The 3Dconnexion settings appear to have no effect on how the device works in PhotoScan.
(http://www.digital-mapping.net/forums/Photoscan/2017/PS140_3Dmouse.jpg)
-
After building the mesh, I try to place a marker on a photo to set up the scale. The marker is supposed to be placed automatically in all photos, but nothing happens. I tested this in v1.3.5 and it worked fine.
Mohamed
-
+1
Same here.
I already mentioned the problems to Alexey, but he wasn't able to reproduce them. Not sure what the issue is.
Very excited to see support for 3D mouse. However when we use it, all the axes seem backwards compared to when we use it with other software. Is there some way to set this?
+1
The 3Dconnexion settings appear to have no effect on how the device works in PhotoScan.
(http://www.digital-mapping.net/forums/Photoscan/2017/PS140_3Dmouse.jpg)
-
Do you mean that the 3D mouse works the same way with the "reverse" option both checked and unchecked?
-
Hi, I've had processing speed problems with PhotoScan versions above 1.3.0 (about double or more the processing time for the same projects), so I tested 1.4.0, with disappointing results... about 35 hours to produce a 4-million-point cloud on an i7-6700K with 32 GB of RAM and a Radeon R7. I'm attaching the log (https://drive.google.com/file/d/0BzGCwRIMzzYsM0ZXTTBfUHdZcTg/view?usp=sharing) in case something comes of it.
Do you have the processing log, or at least the chunk info with the parameters and time spent, from the previous version where the processing time was adequate?
According to the log, depth filtering took a very long time, which could be related to excessive overlap, where most of the cameras overlap with many neighbors.
-
Hi Alexey,
If I adjust the 3D mouse settings (any of them), nothing changes in PhotoScan. Haven't had this problem with other software so far.
Regards,
SAV
Do you mean that the 3D mouse works the same way with the "reverse" option both checked and unchecked?
-
"optimizecameras" seems to produce random results, and messes up the camera alignment even with all flags set to "false", so it should not really do anything.
-
Hi, I've had processing speed problems with PhotoScan versions above 1.3.0 (about double or more the processing time for the same projects), so I tested 1.4.0, with disappointing results... about 35 hours to produce a 4-million-point cloud on an i7-6700K with 32 GB of RAM and a Radeon R7. I'm attaching the log (https://drive.google.com/file/d/0BzGCwRIMzzYsM0ZXTTBfUHdZcTg/view?usp=sharing) in case something comes of it.
Do you have the processing log, or at least the chunk info with the parameters and time spent, from the previous version where the processing time was adequate?
According to the log, depth filtering took a very long time, which could be related to excessive overlap, where most of the cameras overlap with many neighbors.
I will provide this, since I can reprocess the job with 1.3.0. You're right, there is probably much unneeded overlap in this project (it was an early one).
-
Hi, I've had processing speed problems with PhotoScan versions above 1.3.0 (about double or more the processing time for the same projects), so I tested 1.4.0, with disappointing results... about 35 hours to produce a 4-million-point cloud on an i7-6700K with 32 GB of RAM and a Radeon R7. I'm attaching the log (https://drive.google.com/file/d/0BzGCwRIMzzYsM0ZXTTBfUHdZcTg/view?usp=sharing) in case something comes of it.
Do you have the processing log, or at least the chunk info with the parameters and time spent, from the previous version where the processing time was adequate?
According to the log, depth filtering took a very long time, which could be related to excessive overlap, where most of the cameras overlap with many neighbors.
I will provide this, since I can reprocess the job with 1.3.0. You're right, there is probably much unneeded overlap in this project (it was an early one).
Hello cadm8,
In version 1.3.0 the number of pairs for depth filtering had a strict limit of 50 pairs; in later updates the limit was removed.
-
"optimizecameras" seems to produce random results, and messes up the camera alignment even with all flags set to "false", so it should not really do anything.
Hello stephan,
Can you please specify whether the issue is related to version 1.4.0, or whether you observe the same behavior in version 1.3? In the latter case I would assume that incorrect coordinate data has been input for cameras or markers, or that the coordinate system has not been selected properly.
-
Hello cadm8,
In version 1.3.0 the number of pairs for depth filtering had a strict limit of 50 pairs; in later updates the limit was removed.
Hi, that seems like a logical explanation! Is there a way to adjust the limit in the newer versions, or do you think there should be a change regarding this?
-
Hello cadm8,
In version 1.3.0 the number of pairs for depth filtering had a strict limit of 50 pairs; in later updates the limit was removed.
Hi, that seems like a logical explanation! Is there a way to adjust the limit in the newer versions, or do you think there should be a change regarding this?
You can use the following line in the Console pane to limit the number of pairs:
PhotoScan.app.settings.setValue('main/depth_filtering_limit', N)
Here, instead of N, you need to input an integer value, for example 80. Hopefully this will reduce the processing time considerably without any visible issues. I do not recommend going below 50-60 pairs, though.
To return the setting to the default (unlimited) value, please use -1:
PhotoScan.app.settings.setValue('main/depth_filtering_limit', -1)
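Scripted more defensively, the tweak above might be wrapped in a small helper. This is only a sketch: `app` stands in for `PhotoScan.app`, the settings API is assumed to match the `setValue` call shown above, and the 50-pair floor simply encodes Alexey's advice not to go below 50-60 pairs.

```python
# Hedged wrapper around the console tweak above. Pass in PhotoScan.app
# (called `app` here so the sketch stays generic); the settings key and
# setValue signature follow Alexey's console lines, and the 50-pair
# floor encodes his "do not go below 50-60 pairs" advice.

KEY = 'main/depth_filtering_limit'

def set_depth_filtering_limit(app, pairs):
    """Limit the depth-filtering pair count; pairs=None restores the default."""
    if pairs is None:
        app.settings.setValue(KEY, -1)  # -1 means unlimited (the default)
        return
    if pairs < 50:
        raise ValueError("going below 50 pairs is not recommended")
    app.settings.setValue(KEY, int(pairs))
```

In the PhotoScan console this would be called as `set_depth_filtering_limit(PhotoScan.app, 80)`, and `set_depth_filtering_limit(PhotoScan.app, None)` to restore the default.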
-
Appreciate it, and I will try it immediately! Is there a more specific reason the previous 50-pair limit was lifted? I mean, can selecting more pairs lead to visibly better results, but only in certain cases, like small object modeling?
-
Appreciate it, and I will try it immediately! Is there a more specific reason the previous 50-pair limit was lifted? I mean, can selecting more pairs lead to visibly better results, but only in certain cases, like small object modeling?
For some projects, such as building scanning from a copter (also with a lot of overlap), the limit caused dense cloud reconstruction artifacts along the building corners.
-
Wow, just read the release notes and there's some really neat changes in this one. Can't wait to try them out. :D
-
Appreciate it, and I will try it immediately! Is there a more specific reason the previous 50-pair limit was lifted? I mean, can selecting more pairs lead to visibly better results, but only in certain cases, like small object modeling?
For some projects, such as building scanning from a copter (also with a lot of overlap), the limit caused dense cloud reconstruction artifacts along the building corners.
Hi Alexey, I've tested it; it has an effect, but it's not as quick as 1.3.0 when selecting 50 pairs, for example. I believe it brings processing time down, but not as much.
-
Hi Alexey,
If I adjust the 3D mouse settings (any of them), nothing changes in PhotoScan. Haven't had this problem with other software so far.
Regards,
SAV
Do you mean that the 3D mouse works in the same way both with checked and unchecked "reverse" option?
As stated, any (and all) changes to the 3Dconnexion 3D mouse settings have no effect on how the device functions in PhotoScan. Also, most of the 3D mouse motions in PhotoScan are reversed from what I'm accustomed to. For example, moving the puck left pans the object to the right.
-
Hi Alexey, really loving the new update.
I'm getting the "A media resource couldn't be resolved" issue with the videos from my Samsung NX500. It outputs MP4s with H.265 rather than the more common H.264. This is on 64-bit Windows 10.
If it makes a difference, I'm also able to view the videos in various media players and extract frames using DVDVideoSoft's free tool.
If you want a test file to check out, I've uploaded a copy of a ~1 second video of the checkerboard calibration screen from Lens here:
https://www.uploadthingy.com/widget?c=worldviz.com&el=1783660.mp4
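Until native import handles H.265, a common workaround (not an official Agisoft one) is to extract frames externally and add them to the chunk as ordinary still photos. The sketch below only builds a standard ffmpeg command line; the file names, output directory, and 2 fps sampling rate are placeholders.

```python
# Workaround sketch: extract still frames from an H.265 MP4 with ffmpeg,
# then add the frames to PhotoScan as ordinary photos. This only builds
# the command; uncomment the subprocess call to actually run it.
# "video.mp4", "frames", and the 2 fps rate are placeholders.
import subprocess  # used by the commented-out call below

def ffmpeg_extract_cmd(video, out_dir, fps=2):
    """Build an ffmpeg command that saves `fps` frames/second as PNGs."""
    return [
        "ffmpeg", "-i", video,
        "-vf", f"fps={fps}",          # sample this many frames per second
        f"{out_dir}/frame_%05d.png",  # numbered output frames
    ]

cmd = ffmpeg_extract_cmd("video.mp4", "frames")
# subprocess.run(cmd, check=True)  # run once ffmpeg is on the PATH
print(" ".join(cmd))
```

The extracted PNGs can then be added through Workflow -> Add Photos like any other image set.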
-
"optimizecameras" seems to produce random results, and messes up the camera alignment even with all tags set to "false" so that it should not really do anything.
Hello stephan,
Can you please specify whether the issue is related to version 1.4.0, or you are observing the same behavior in version 1.3? In the latter case I can assume that there is incorrect coordinate data input either for cameras or markers, or that the coordinate system has not been selected properly.
Yep sorry about that, the coordinate system was messed up.
-
Using 1.4.0 build 5097 on an iMac 5k. The camera icons and the thumbnails with view images are quite small. I have tried toggling High DPI mode but nothing changes.
Tom
-
Hi, I've had speed problems with PhotoScan versions above 1.3.0 (about double or more the processing time for the same projects), so I tested 1.4.0 with disappointing results: about 35 hr to produce a 4-million-point cloud on an i7 6700K with 32 GB RAM and an R7 Radeon. I'm attaching the log (https://drive.google.com/file/d/0BzGCwRIMzzYsM0ZXTTBfUHdZcTg/view?usp=sharing) in case something stands out.
Do you have the processing log, or at least the Chunk info with the parameters and time spent, from the previous version, where the processing time was adequate?
According to the log, the depth filtering took a very long time, and it could be related to excessive overlap, where most of the cameras overlap with a lot of neighbors.
I will provide this, since I can reprocess the job with 1.3.0. You're right, there is probably a lot of unneeded overlap in this project (it was an early one).
I'm attaching the log from 1.3.0. Still a big difference in processing times (about 2-3x more time needed).
-
Appreciate it and I will try it immediately! Is there a more specific reason that the previous 50-pair limit got lifted? I mean, can selecting more pairs lead to visibly better results, but only in certain cases, like small-object modeling?
For some projects, like building scanning from a copter (also with a lot of overlap), the limit caused dense cloud reconstruction artifacts along the building corners.
Can anyone please confirm whether or not this will reduce the true-ortho artefacts near buildings/objects? It seems like it should. Lengthening the processing time but increasing the quality is definitely interesting.
-
Appreciate it and I will try it immediately! Is there a more specific reason that the previous 50-pair limit got lifted? I mean, can selecting more pairs lead to visibly better results, but only in certain cases, like small-object modeling?
For some projects, like building scanning from a copter (also with a lot of overlap), the limit caused dense cloud reconstruction artifacts along the building corners.
Can anyone please confirm whether or not this will reduce the true-ortho artefacts near buildings/objects? It seems like it should. Lengthening the processing time but increasing the quality is definitely interesting.
Is this new in version 1.4, or was the number of pairs already lifted in the current stable version 1.3.4?
-
Using 1.4.0 build 5097 on an iMac 5k. The camera icons and the thumbnails with view images are quite small. I have tried toggling High DPI mode but nothing changes.
Hello Tom,
You can increase the size of the placeholders with the mouse wheel while holding Shift key.
-
I would like more information on the scanned film with fiducial marker support.
I can't find anything else on this.
hi,
I typed in the coordinates of the fiducial marks and the focal length, but no prompt appears to measure the fiducial marks on the photos...
-
hi,
Alexey answered the question, see in General section > fiducial marks?.
I posted twice because I did not know how to delete a message ! :D
-
Hi Alexey, really loving the new update.
I'm getting the "A media resource couldn't be resolved" issue with the vids from my Samsung NX500. It outputs to MP4s with H.265 rather than the more common H.264. This is on 64 bit windows 10.
If it makes a difference, I'm also able to view the vids in various media players and extract frames using DVDVideoSoft's free tool.
If you want a test file to check out, I've uploaded a copy of a ~1 second video of the checkerboard calibration screen from Lens here:
https://www.uploadthingy.com/widget?c=worldviz.com&el=1783660.mp4
Thank you for the update and new features! I have the exact same problem. It occurs with all video types. I'm using win10, standard 64 bit version. thanks
-
Masked portions of images are being applied to my textures!
Something is wrong, because masked portions of the image should not be used in texture generation.
-
Hello Jeremiah,
Thanks for reporting. We'll check that and hopefully fix it in the next pre-release update.
-
Hi
Great work on new release as always
I am having a bug where I edit masks and then save the project, but when I go to edit the next mask I get a "Can't read project masks.zip" error.
Also, I get a long load time on some masks that I haven't worked on yet in the project.
It may have something to do with the fact that I moved the project to a portable drive; I initially worked on a workstation. It could be a different-drive-letter issue or something.
Upon reloading the project it works fine again, but I can repeat the error every time.
-
I have been playing with the new Model View <Camera><Show Images> feature. Objects I work with tend to be symmetric. The new feature shows the image on both sides of the camera: when looking through the camera as well as when looking into the camera. It should only display the image when looking through the camera at the object; otherwise you can't tell which side of a symmetric object you took the picture from!
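The display rule being requested here comes down to a simple orientation test: draw the thumbnail only when the viewer is behind the camera, looking through it toward the object. A minimal sketch of that test (the function name and vector conventions are illustrative assumptions, not PhotoScan code):

```python
def should_show_image(camera_forward, camera_pos, viewer_pos):
    """Show the photo thumbnail only when the viewer looks *through*
    the camera toward the object (viewer behind the image plane)."""
    # Vector from the viewer to the camera; if it points the same way
    # as the camera's forward axis, the viewer is behind the camera.
    to_camera = tuple(c - v for c, v in zip(camera_pos, viewer_pos))
    return sum(f * t for f, t in zip(camera_forward, to_camera)) > 0.0
```

A camera at the origin looking down +Z would then show its image to a viewer at z = -5 but hide it from a viewer at z = +5, which is exactly the asymmetry the post asks for.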
-
On macOS, while batch processing a 6-chunk project, both the dense cloud and model are erased from each chunk. Only tie points, DEM and orthomosaic are retained.
-
Hi,
I have been playing around with the new version and there is a bug where you can't set a custom resolution for the mesh generation. The field is disabled when using the arbitrary 3D mode. The Ultra High setting doesn't give enough detail to generate a sharp normal map from the data.
But I had an idea and hope it is possible to add as a new feature in 1.4.
Is it possible to bake the dense point cloud height information into the texture's alpha channel? It would trace from the generated mesh to the dense point cloud and record the difference in distance as a grayscale texture in the alpha of the color map, or as a separate set of textures.
This way we could use the lower-res meshes and displace them later to get back the full detail. This would make handling detailed scans a lot easier, since you don't need to work with massive 40-million-plus-poly meshes.
What do you think?
cheers,
Willi Hammes
CEO
MAWI United GmbH
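For what it's worth, the displacement-baking idea could be prototyped outside PhotoScan. A rough sketch follows; the brute-force nearest-point search, the signed projection onto the normal, and the 8-bit encoding with 128 meaning "no displacement" are all my assumptions for illustration, not an existing feature:

```python
import math

def bake_displacement(surface_points, normals, dense_cloud, max_disp):
    """Map signed distances from a low-res surface to the dense cloud
    into 8-bit gray values (128 = on the surface, 255 = +max_disp)."""
    grays = []
    for point, normal in zip(surface_points, normals):
        # Brute-force nearest dense-cloud point (fine for a sketch;
        # a real tool would use a spatial index such as a k-d tree).
        nearest = min(dense_cloud, key=lambda q: math.dist(point, q))
        offset = tuple(qi - pi for qi, pi in zip(nearest, point))
        # Signed distance along the surface normal, clamped to range.
        signed = sum(oi * ni for oi, ni in zip(offset, normal))
        t = max(-1.0, min(1.0, signed / max_disp))
        grays.append(round(127.5 + 127.5 * t))
    return grays
```

The resulting gray values could then be written into an alpha channel or a separate displacement texture, as the post suggests.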
-
On macOS, while batch processing a 6-chunk project, both the dense cloud and model are erased from each chunk. Only tie points, DEM and orthomosaic are retained.
Hello Renato,
Can you please specify, which batch steps have been used? Maybe you can share batch XML, if it is still accessible?
-
Is it a new thing in 1.4 that full-size images are not cached, so it has to reload the full image each time you page up/page down to cycle through them, showing a pixelated thumbnail while you wait?
I'm working with tens of terabytes of images which are held on a network file server.
The images are almost completely featureless white-painted surfaces, so the best way to check overlap used to be to visually identify tiny defects in the paint finish and flip to the next image and back quickly, multiple times, to see whether they exist in adjacent images or not.
This is impossible now that the image disappears, reappears pixelated, and then takes a couple of seconds to load in full each time you switch.
(Win 10 Home, PS Pro 1.4 build 5097 64 bit)
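The behavior being asked for here is essentially a decoded-image cache keyed by file path, so that flipping back to a recently viewed photo is instant. A minimal sketch of the idea (the loader below is a stand-in, not PhotoScan's actual image pipeline):

```python
from functools import lru_cache

@lru_cache(maxsize=16)  # keep the last 16 decoded full-size images in RAM
def load_full_image(path):
    """Stand-in for the expensive network fetch + decode; caching it
    makes page-up/page-down flips between neighbouring photos instant."""
    with open(path, "rb") as f:
        return f.read()  # real code would decode to a bitmap here
```

Repeated calls with the same path hit the cache instead of re-reading from the network share, which is exactly the pre-1.4 behaviour the poster is missing.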
-
Hi Alexey
Possibly; I cannot recover the XML, but I remember perfectly which steps were used, because I use the same workflow in these types of jobs. The steps were:
Align Photos - Accuracy Highest - Default in all others
Optimize Alignment - defaults
Build Dense Cloud - Quality Ultra High; Depth filtering Moderate - Default in all others
Build Mesh - Surface Type Height field; Face Count High - Default in all others
Build Dem - Defaults
Build Orthomosaic - Defaults (here the color correction present in version 1.3.x is missing)
The "Apply" option in each step was All Chunks
In the end, both dense cloud and mesh were erased from each chunk. I did not monitor the project while it was running, so I cannot specify the step in which this data was erased.
Cheers
-
Hello Renato,
Is it possible that there were Align Photos or Optimize Cameras tasks somewhere after the Build Dense Cloud and Build Mesh jobs? Maybe you have the processing log saved from this batch?
I'll try to reproduce it on Mac OS X shortly, but on Windows the batch works as expected.
-
I'll try to reproduce the problem in a smaller project using the same conditions. Considering that things failed, I've trashed the files and I'm now processing the same project in 1.3.4.
-
I have been playing around with the new version and there is a bug where you can't set a custom resolution for the mesh generation. The field is disabled when using the arbitrary 3D mode. The Ultra High setting doesn't give enough detail to generate a sharp normal map from the data.
Hello Willi,
Do you mean the new (experimental) mesh reconstruction option?
The polygon count is currently selected automatically for it. To improve the quality of the mesh you can use the Refine Mesh option after the polygonal model has been reconstructed. Mesh refinement will likely increase the polygon count to improve the level of detail on the mesh surface.
-
Hi Alexey,
yes, I noticed that too after looking through the menus.
This indeed helped a lot with the detail, and it's a lot cleaner as well compared to 1.3's output.
I tried to use it on older scans made with 1.3, but it keeps crashing on all of the meshes.
I guess the base meshes are way too dense to also run the refinement tool on them.
Will do some more tests with new data in 1.4.
But the idea of having the relative displacement stored in a baked texture would still be a cool feature to have.
Best regards
Willi Hammes
-
I'm using 1.4 at the moment, and when trying to "Build Orthomosaic" with a particular image set, it fails with "TiffReadTile: unexpected error". This probably isn't a 1.4-specific problem, but I could not find any posts about this on the forum, so figured it might be a new thing? Any clue what would cause this error? Thanks!
Console says:
2017-10-16 14:13:47 boundaries extracted in 0.053 sec
2017-10-16 14:13:47 region overflow: reading [1071, -12194] - [1080, -12185] from 17x13 image
2017-10-16 14:13:47 libtiff error: 1024: Col out of range, max 16
2017-10-16 14:13:47 region overflow: reading [1071, -12194] - [1080, -12185] from 18x19 image
2017-10-16 14:13:47 libtiff error: 1024: Col out of range, max 17
2017-10-16 14:13:47 Finished processing in 70.297 sec (exit code 0)
2017-10-16 14:13:47 Error: TIFFReadTile: unexpected error
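For context, the console lines show the exporter requesting a tile region with a hugely negative row range ([1071, -12194] to [1080, -12185]) from tiny 17x13 and 18x19 images, which libtiff rightly rejects. The kind of bounds check that fails here can be sketched as:

```python
def region_in_bounds(x0, y0, x1, y1, width, height):
    """True if the requested read region [x0, y0]-[x1, y1] (inclusive
    pixel coordinates) lies fully inside a width x height image."""
    return 0 <= x0 <= x1 < width and 0 <= y0 <= y1 < height

# The failing request from the console log above is far outside the
# 17x13 image, so any such check rejects it before libtiff is called.
```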
-
Hello school,
Do the images work without any issues in 1.3.4, and are there any similar error messages when aligning or building the dense cloud in 1.4?
If so, can you please send a sample source image to support@agisoft.com that we can use for testing purposes?
-
Hi Alexey,
There was nothing surprising with aligning and dense cloud building in 1.4. Let me switch over to 1.3.4 and see what happens.
The only thing I did differently in this image set compared to the last several image sets I processed in 1.4 was that I used masks.
I will send the images to you if there is no problem in 1.3.4.
Thanks!
-
Hi,
Everything worked well for me except exporting the report: I do not get any pictures in the report.
Regards,
Luis
-
Hello Luis,
Does the report generation of the same project work fine in 1.3.4?
Usually missing images in the report are related to referencing problems: either the referencing is incorrect, or some cameras with wrong coordinates are present (like 0,0,0).
-
I have been playing with the new Model View <Camera><Show Images> feature. Objects I work with tend to be symmetric. The new feature shows the image on both sides of the camera: when looking through the camera as well as when looking into the camera. It should only display the image when looking through the camera at the object; otherwise you can't tell which side of a symmetric object you took the picture from!
I would also like to see an indication of which side of the thumbnail is towards the object.
In addition, it would also be helpful to have an indication of camera groups. Perhaps a different border color for each workspace camera group?
Tom
-
Hi Agisoft team, the new features look quite interesting and useful. Can you please tell me an approximate date when we can expect the stable version of this release for download? Maybe you already wrote it somewhere, but I couldn't find it.
Thanks
-
Hi All,
Am I blind or did the extremely useful "Gradual selection" option just vanish in the latest release?
Cheers,
Geert
-
Hi All,
Am I blind or did the extremely useful "Gradual selection" option just vanish in the latest release?
Cheers,
Geert
They moved it to another tab (Model) I think.
I will repeat my humble request to add a keyboard shortcut for this option in here :)
-
Hi Agisoft team, the new features look quite interesting and useful. Can you please tell me an approximate date when we can expect the stable version of this release for download? Maybe you already wrote it somewhere, but I couldn't find it.
Hello ivan-zd,
I think that the release of 1.4.0 may be expected this winter, hopefully at the end of December.
-
Hello wojtek,
In the next update of 1.4.0 (build 5251), which should be available in a few minutes from the first post of the thread, you will be able to find customizable shortcuts for the Main Menu items (General tab of the PhotoScan Preferences window).
Please check the links - version is updated.
-
For photogrammetry of sculpture, more capability in dealing with the dense cloud is really the only thing I miss in what is otherwise a fantastic product.
We have added the Filter By Selection option for the dense cloud points view in the Standard edition (1.4.0 build 5251), so you can check this one now.
-
Hi Alexey
This version crashes immediately under macOS. I turned off Gatekeeper, but that did not solve the problem.
Console says:
default 16:21:09.766925 +0100 kernel AGC:: [PhotoScanPro pid 8570 mux-aware] exiting, non-mux-aware app count 1, runtime: 0:00:06.176
default 16:21:09.767497 +0100 kernel AGC:: [PhotoScanPro pid 8570 non-mux-aware] exiting, non-mux-aware app count 0, runtime: 0:00:00.792
default 16:21:09.767530 +0100 kernel AGC:: IGD (usr IGD extdisp 0 WS-ready 1 capture 0 non-mux-aware app 0 ac 0)
default 16:21:09.767549 +0100 kernel AGC:: Switching from PEG to IGD
default 16:21:09.784420 +0100 CommCenter #I handleLSNotitifcation_sync: Application exited: <private>
default 16:21:09.789524 +0100 launchservicesd UNIX error exception: 22
default 16:21:09.792344 +0100 launchservicesd MacOS error: -67062
default 16:21:09.795081 +0100 launchservicesd CHECKIN:0x0-0x632632 8571 org.mozilla.crashreporter
RH
-
Hello Renato,
Looks strange, I've just checked it on Sierra, will try on High Sierra then.
-
Hi Alexey
I'm using High Sierra, yes. I've used the built-in feature to send the crash report to Agisoft; I don't know whether it worked or not.
The previous beta was working as expected. I told you that I had a "disappearing information" error, but I could not reproduce it again. I guess that was an isolated misbehavior.
Cheers
RH
-
Hi Alexey,
I have a question regarding the MicaSense RedEdge camera radiometric calibration.
I have loaded photos, chosen to create multispectral, manually created the Calibration images folder, loaded the calibration panel images and masked them, but I am not sure that PhotoScan is actually using the calibration images.
I have read the EXIF data, and from MicaSense I got the reflectance information about the panel for every band.
I used that reflectance information in the Calibrate Sensitivity setting as albedo (perhaps I made a mistake here); after that, the sensitivity is automatically set in Camera Calibration/Bands and, for every band, it is above 1, while black levels are set to 0.
The results I am getting are bad. After entering the albedo, all images look overexposed, and the model is white. :-\
Without radiometric calibration, it creates a good model.
Any suggestions?
-
Does the new version support radiometric calibration using the Parrot sequoia irradiance sensor?
Thanks
-
Airphoto processing results from the MicaSense camera in natural color show visible unbalanced areas, which can affect visual and digital analysis.
Question:
Besides radiometric correction, can Agisoft also produce results as reflectance values rather than the digital numbers it produces now?
-
Has anyone been able to successfully generate an animation of a point cloud using Blender to design the camera path? I am absolutely new to Blender and have no clue where to start. I reckon as a minimum one should import the point cloud/mesh to start off with for navigation purposes, but even that hasn't worked out so far!
-
Hello Nenad,
The "black level" value indicates the pixel value for the corresponding band that represents "true black" for that band.
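To illustrate with a simplified linear model (an intuition aid only; the actual MicaSense/PhotoScan calibration also accounts for vignetting, exposure and other terms): the black level is the offset subtracted from the raw digital number before the band's sensitivity scale is applied.

```python
def dn_to_radiance(dn, black_level, sensitivity):
    """Simplified linear calibration sketch: subtract the band's
    'true black' offset, then scale. Pixels at or below the black
    level map to zero."""
    return max(0.0, dn - black_level) * sensitivity
```

So a pixel exactly at the black level contributes nothing, and everything above it is scaled linearly; if the black levels are left at 0 when they should not be, the whole band is shifted upward, which is one way to end up with washed-out, overexposed-looking output.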
-
Hi, Alexey,
Is there more control in the Move/Scale/Rotate option (i.e. a way to enter a value for X, Y, Z)?
Another thing: when I try to put a marker on the sparse point cloud, it takes a long time (3-4 min) to place the marker, unlike the previous version 1.3.4, which was fast in this step. BTW, it happens with a big sparse point cloud (i.e. up to 5 million points).
Best,
Mohammed
-
Hey, I have a question about camera calibration from within Photoscan Standard 64-bit, build 5251.
I wanted to try out the newly merged-in camera calibration, so I loaded up the chessboard from Tools -> Lens -> Show Chessboard, took some shots, ran Tools -> Lens -> Calibrate Lens and ended up with some aligned photos in the model view (attached).
Where do I go from here? I'm familiar with the original method with Lens as a separate program, but I don't see anywhere to save out camera profiles. Is there a new workflow to follow for that part?
-
Version Mismatch in Network Mode
I installed 1.4.0 build 5251.
But if I use a build 5251 node to connect to the same computer's server, it shows a version mismatch.
At the same time, the computer that is running the server will accept nodes that are running build 5097.
I.e. this looks like a bug: the build 5251 server accepts nodes running build 5097 only, but does not accept nodes running 5251.
-
Hello rexsham,
Version mismatch is related to the node versions: only nodes of the same version may connect to the same server.
-
Hello Nenad,
The "black level" value indicates the pixel value for the corresponding band that represents "true black" for that band.
OK, I have read the EXIF data and entered the black level from the metadata, without any difference in the processing output. How do I know that PhotoScan is actually using the calibration images?
-
Hello Nenad,
You can reset the sensitivity calibration and perform the calibration again, then copy and provide the Console pane output.
-
Hello,
I want to report a big problem with exporting Google Maps tiles and MBTiles in the last version: I see the notice "Invalid raster transform value". My data are in EPSG:5514. Everything works fine in version 1.3.4 - data are transformed correctly into Web Mercator and then exported. See the attachment.
The second problem is with orthorectification. Please see the second attachment - there is some problem with reading rasters. The whole project with the images can be found here: https://www.dropbox.com/s/1l3at7943xfzr0r/project.zip?dl=0
Best regards,
Jakub
-
Hello rexsham,
Version mismatch is related to the node versions: only nodes of the same version may connect to the same server.
Let me restate my point.
I am using a build 5251 node to connect to a build 5251 server on the SAME computer.
It does not work; it shows "Version mismatch".
BUT I can connect a 5097 node (which is another computer on the network) to the 5251 server, which is strange.
Have a closer look at the image attached.
This is the computer with IP 192.168.1.161.
192.168.1.161 is the computer running the build 5251 server.
192.168.1.166 is the computer on the network running a 5097 node (IT CAN REGISTER, which it should not).
When I try to start a node with build 5251 on 192.168.1.161, on the same computer, it fails.
-
Hello rexsham,
Can you check with the Network Monitor whether there are any other nodes (on version 5097) that are connected to your server?
-
Hi Alexey
In case the crash reporter under MacOS High Sierra is not able to send you a crash file, I've extracted the following line that can give you a clue about the "crash on startup" problem:
File "/Users/xxxxxxxx/Library/Application Support/Agisoft/PhotoScan Pro/scripts/ortoscrip.py", line 11, in <module>
from PySide import QtCore, QtGui
ImportError: No module named 'PySide'
Cheers
-
Hello Renato,
It seems that you have outdated autorun scripts (for version 1.2?) that are not compatible with version 1.4. If you remove them (move them to any other location), the application should start normally.
-
I want to report a big problem with exporting google map tiles and MB tiles in the last version. I see notice "Invalid raster transform value". My data are in EPSG 5514. Everything works fine in the version 1.3.4. Data are transformed correctly into web mercator and then exported in the 1.3.4. See the attachment.
Second problem is with orthorectification. Please see the second attachment - there is some problme with reading rasters. Whole project with the images you can find here: https://www.dropbox.com/s/1l3at7943xfzr0r/project.zip?dl=0
Hello Jakub,
It seems that there's a bug in the orthomosaic export that shows the "Invalid Raster Transform Value" error. It is not related to the coordinate transformation. We'll fix it in the next version update.
Unfortunately, I do not see any libtiff errors with the provided images; if you can send any single image that raises the warning in the Console pane, it would be helpful.
-
Hi Alexey
But where are those possible 1.2 autorun scripts located? I've just replaced the old beta with the new 1.4 beta; shouldn't they work in the same manner? I guess this new 1.4 beta has something different from the previous one that triggers this behavior. The previous 1.4 beta (5097) works as expected.
-
Hello Renato,
Can you check the path that appears in the error?
"/Users/xxxxxxxx/Library/Application Support/Agisoft/PhotoScan Pro/scripts/ortoscrip.py"
-
Hi Alexey
I was so busy that I did not think of the obvious :-)
Done. I erased the script and it works.
Thanks a lot
Cheers
-
Hello rexsham,
Can you check with the Network Monitor whether there are any other nodes (on version 5097) that are connected to your server?
There are no other nodes on version 5097 in the network.
The image I attached in the previous post shows the Network Monitor running. There are only two computers in the network: one running 5097 as the node, one running 5251 as the server. You can see that 192.168.1.166 has connected as a 5097 node, while the node running on 192.168.1.161 (the computer running the 5251 server itself) cannot connect.
-
Hello rexsham,
On your screenshot (see cropped image below) there are two node instances connected to the server; both nodes are running version 5097, while the server is running 5251. Both instances run on host 192.168.1.166, whereas the server runs on 192.168.1.161, and from that same host (.161) you are trying to start an updated node.
So if any other node tries to connect and doesn't have the proper version installed (exactly the same as the already connected nodes), you'll get the version mismatch issue.
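In other words, the admission rule described above can be modeled as the server locking onto the build of the first node that connects; a toy sketch (names are illustrative, not the actual server code):

```python
class DispatchServer:
    """Toy model of the node-admission rule: all worker nodes must run
    the same build; the first node to connect fixes that build."""
    def __init__(self):
        self.required_build = None

    def register_node(self, build):
        if self.required_build is None:
            self.required_build = build   # first node sets the baseline
            return True
        return build == self.required_build  # later nodes must match
```

This reproduces rexsham's scenario: once a 5097 node has registered, further 5097 nodes are accepted but a 5251 node is rejected, even though the server binary itself is newer.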
-
It seems that there's a bug in the orthomosaic export that shows the "Invalid Raster Transform Value" error. It is not related to the coordinate transformation. We'll fix it in the next version update.
Unfortunately, I do not see any libtiff errors with the provided images; if you can send any single image that raises the warning in the Console pane, it would be helpful.
Hello Alexey: I've had the same "Invalid Raster Transform Value" error, which is very inconvenient at this very moment.
I have noticed that the file size shown in the tree does not match what I was expecting, which is about double that of the DEM file (the DEM can be exported flawlessly).
-
Hello JMR,
The size of the orthomosaic may be smaller than double the DEM (from a high-quality dense cloud), since the DEM may be extrapolated, whereas the orthomosaic covers only visible areas.
As for the "Invalid Raster Transform Value" - it's a bug affecting any orthomosaic export in the given build. Until we have uploaded the new update, you can use the previous 1.4.0 build for export.
-
Hello,
Looks like PhotoScan 1.4 has some great features! I'm disappointed the "remove lighting" tool is only available in the Pro version, but still, nice improvements.
Like other users, I get the "A media resource couldn't be resolved" error when importing a video.
Also, before clicking "OK", the preview does not seem to be working (it's just a black rectangle).
Tested using PhotoScan 1.4.0 build 5251 64-bit, Windows 10, video: .MP4 / Sony's XAVC-S codec.
I can provide a video sample, if needed.
I also just noticed a small translation error in the French version (I don't know if it was there before 1.4):
File > Import > Import Model is translated as "Fichier>Importer>Exporter le modèle"; this should be changed to "Importer le modèle".
-
Hello rexsham,
On your screenshot (see cropped image below) there are two node instances connected to the server; both nodes are running version 5097, while the server is running 5251. Both instances run on host 192.168.1.166, whereas the server runs on 192.168.1.161, and from that same host (.161) you are trying to start an updated node.
So if any other node tries to connect and doesn't have the proper version installed (exactly the same as the already connected nodes), you'll get the version mismatch issue.
Thank you very much Alexey.
After your explanation I finally understand that "Version mismatch" does not mean a node/server version mismatch; it actually means that a node running a different version than the nodes already connected is trying to connect to the server.
I would kindly suggest changing "Version Mismatch" to "Nodes running different versions are not allowed; please make sure all the nodes are running the same version and build of PhotoScan."
The original message "Version Mismatch" is a bit confusing; I took it to mean a server/node version mismatch.
Anyway, thank you so much! 1.4 has a lot of great improvements. Make sure the new features are well documented in the user guide and your website tutorials - the tutorials on the web are severely outdated.
-
Very excited about the new features, most of all the de-lighting function, which I would not have access to. I hope you reconsider not including it in the base version, as it's exactly what game artists / asset creation need (I suspect more than the average pro user), and not including it just pushes us more toward Unity workflows.
Anyway, congrats and looking forward to upgrade!
-
Hey, I have a question about camera calibration from within Photoscan Standard 64-bit, build 5251.
I wanted to try out the newly merged-in camera calibration, so I loaded up the chessboard from Tools -> Lens -> Show Chessboard, took some shots, ran Tools -> Lens -> Calibrate Lens and ended up with some aligned photos in the model view (attached).
Where do I go from here? I'm familiar with the original method with Lens as a separate program, but I don't see anywhere to save out camera profiles. Is there a new workflow to follow for that part?
Hi Smallpoly, future Smallpoly here. It looks like you did something wrong the first time, because I tried again and didn't have aligned photos showing up in the model view. Maybe you hit Align Photos at some point and forgot about it? Running the calibration shows the results in the console and applies them to the "Adjusted" tab in Camera Calibration. From there you can save them out to XML as in the original Lens program.
-
Can I know what is this problem?
Build 5251
-
I really need a fix for the "invalid raster transform" error when producing the orthoimage!
Could you tell me a workaround, or are you going to fix it today?
My customer really needs a solution today...
-
Hello GeoGecco,
I can share the link to the previous pre-release build for you to export the orthomosaic. The public pre-release update is expected by the end of this week.
-
Hello rexsham,
The libtiff warning indicates that something is incorrect in the metadata, but it seems that it doesn't affect the processing.
You can send the image example to support@agisoft.com so that we can check which tag is causing the warning.
-
Hello Alex
I installed 1.3.4 now and I'm able to export the orthophotos.
-
Hello,
I have a question about raster transformation. I use EPSG:5514 for my project. If I want to export MBTiles, there must be a conversion into EPSG:3857, but by default only the 3-parameter transformation, which has 6 m precision, is in the 5514 prj file. Another transformation from 5514 is a 7-parameter transformation, which has 1 m precision. If I add a new coordinate system with this transformation, nothing happens; I again get a bad result with low precision at the end. Is it possible to change this somewhere? I know that I am able to define a new projection in the "doc" file in the orthomosaic folder (project files folder).
Best regards,
Jakub
-
I got a problem with exporting using the latest build.
-
Please have a look at the log.
I processed a project up to the DOM stage.
But when I try to export the DEM, it finishes within 1 second and nothing is output.
If I create the DSM using the sparse cloud it can output, but when I use the dense cloud to generate the DSM, it cannot.
It cannot output even when I select cutting the output into 10000x10000 tiles.
-
Hello rexsham,
"Invalid raster transform value" is a bug in the current pre-release build. It has already been reported and will be fixed in the next update, by the end of the week.
-
Hi Alexey,
Could you please share the link to the previous build? jleon@usc.edu.au
cheers,
Javier
-
Related to my topic about the bad transformation:
Agisoft uses the 3-parameter transformation by default (15965) - http://epsg.io/5514
A much better transformation is the 7-parameter one (1623) - http://epsg.io/5514-1623
Best regards,
Jakub
-
Hello Jakub,
When you select your reference coordinate system EPSG:5514 in the Reference Settings dialog, there is a button to the right of the coordinate system selection field related to the Datum Transformation Settings.
Clicking it allows you to select one of the transformations to WGS84 or use custom values.
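For reference, the 7-parameter (Helmert) datum transformation that Jakub prefers over the 3-parameter shift combines a translation, a uniform scale, and three small rotations. A sketch of the standard position-vector, small-angle form:

```python
def helmert7(x, y, z, tx, ty, tz, rx, ry, rz, s_ppm):
    """Position-vector 7-parameter Helmert transformation with
    small-angle rotations rx, ry, rz (radians) and scale in ppm:
    X' = T + (1 + s) * R * X with R linearized for small angles."""
    m = 1.0 + s_ppm * 1e-6
    xp = tx + m * (x - rz * y + ry * z)
    yp = ty + m * (rz * x + y - rx * z)
    zp = tz + m * (-ry * x + rx * y + z)
    return xp, yp, zp
```

With rotations and scale set to zero this degenerates to the 3-parameter shift the thread is comparing against, which is exactly why picking the wrong variant in the Datum Transformation Settings can cost several meters of precision.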
-
Pre-release has been updated. Orthomosaic export issue should be fixed now.
-
Hello Alexey Pasumansky,
Build 5251 export to Smart3D with tie points has a problem:
1. The XML is missing the child tag <Name> in <SRS>.
2. When the coordinate system is defined (like EPSG:2383), Smart3D gives a wrong result.
Step by step:
(1) AT worked with EPSG:2383
(2) Coordinate system changed to local (m)
(3) Export to XML
(4) Change the XML file, adding the <Name> and <Definition> tags; in this case DO NOT USE "local", you need to USE "EPSG:2383",
like:
<SRS>
<Id>1</Id>
<Name>Xian 1980 .......... </Name>
<Definition>EPSG: 2383</Definition>
</SRS>
The result is correct with Smart3D 4.4.7.
3. A large project (6,400,000 tie points / 14,000 images) causes a memory overflow and the XML export fails.
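The manual step (4) above — adding the missing <Name> and <Definition> children to <SRS> — could be scripted with the Python standard library. A sketch (the tag names come from the post; I have not verified the exact Smart3D schema):

```python
import xml.etree.ElementTree as ET

def patch_srs(xml_text, name, definition):
    """Insert <Name> and <Definition> into every <SRS> element that
    lacks them, as described in the post for Smart3D import."""
    root = ET.fromstring(xml_text)
    for srs in root.iter("SRS"):
        if srs.find("Name") is None:
            ET.SubElement(srs, "Name").text = name
        if srs.find("Definition") is None:
            ET.SubElement(srs, "Definition").text = definition
    return ET.tostring(root, encoding="unicode")
```

For example, `patch_srs(xml_text, "Xian 1980 / Gauss-Kruger", "EPSG:2383")` would fill in both tags on every bare `<SRS>` block before handing the file to Smart3D.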
-
Hy Alexey
I'm trying the new useful "import points" feature in build 5310.
Importing points works well, but when I try to create a mesh Agisoft crashes.
I attached the log file.
Best regards
Andrea
-
Hi Alexey,
Thanks for the update. I started trying it and noticed three things about the point cloud importer:
1. When I import point clouds, the top view becomes the bottom view, so pressing shortcut 7 gives me the bottom view (Z axis in the opposite direction); the same happens for shortcuts 3 and 4 (they give me Bottom Right and Bottom Left).
2. Why can't I import a point cloud directly in the software (i.e. I have to open some old project to get this option activated)?
3. When I try to build a mesh from the imported point cloud (2 million points) it takes a very long time and stops at 19%; I reduced the point cloud to 400k points and it also stopped at 19%. See snapshot 1.
Also, another problem happens while I build a cylindrical orthomosaic for a cylindrical column: processing stops and gives me this message (see snapshot 2). Edit on the cylindrical orthomosaic: I saved the file to another HDD and got this interesting result. Keep going, Agisoft team. See snapshot 3.
Mohammed
-
Hello Alexey
I am testing the new "Import Points" function in build 5310, and so far everything works excellently, even with really large point clouds. The only thing that does not work is generating a tiled model from the imported points.
-
Hello Alexey,
There is some mismatch in the "Dense Cloud" tools (build 5310). If you click on "Classify Ground Points" you will get "Select Points by Color"; if you click on "Select Points by Color" you will get "Select Points by Mask", etc.
Other questions:
1) May I skip the Align step if I have exterior orientation parameters and camera calibration parameters? I can import them and would like to continue with the dense cloud. Is it possible?
2) What about color correction? Where is that checkbox now?
Best regards,
Jakub
-
Ooh, a feature for preventing ghosting? That sounds awesome! Ghosting has been a cause of huge trouble any time I do interiors. I'll need to try that one out too. This update keeps getting better and better.
-
Hello,
I just tested version 1.4; the new features are great.
I have already done a test mixing LIDAR data with photos from a video, and everything worked. Awesome!
I'm testing a new, larger dataset, and at the video import stage I get the error message "Failed to load media".
My video is an .mp4 a little over 2 GB.
Is this a file size problem?
Thank you in advance for your answer.
-
How is one able to select/turn on GPUs in v1.4?
I found my dense cloud generation step took over four times as long in v1.4 compared to 1.3.4. Upon closer inspection I found that none of my GPUs show up in the Preferences dialog. They are not just unchecked; they do not show up at all. They did in the latest 1.3.4 release. Any fix for this?
-
How is one able to select/turn on GPUs in v1.4?
I found my dense cloud generation step took over four times as long in v1.4 compared to 1.3.4. Upon closer inspection I found that none of my GPUs show up in the Preferences dialog. They are not just unchecked; they do not show up at all. They did in the latest 1.3.4 release. Any fix for this?
Hello BobvdMeij,
Which OS are you using and which graphic card is installed on your system?
-
There is some mismatch in "Dense cloud" tool (build 5310). If you click on "Classify Ground points" you will get "Select points by color", if you click on "Select points by color" you will get "Select points by mask", etc...
Hello Jakub,
Thanks for letting us know; it seems that for some reason the options were mixed up. We'll fix it in the next update.
-
Other questions:
1) May I skip the Align step if I have exterior orientation parameters and camera calibration parameters? I can import them and would like to continue with the dense cloud. Is it possible?
2) What about color correction? Where is that checkbox now?
Hello Jakub,
1) It is possible using the Import Cameras option: it loads the exterior and interior orientation parameters, which are not refined afterwards. However, you still need to run the image matching process (Build Points) to identify the overlapping images and find the correspondences between them.
2) Color correction has been moved to a separate, isolated function: Tools Menu -> Calibrate Colors. It should be performed prior to the orthomosaic or texture generation process.
-
Hello Alexey,please.
http://www.agisoft.com/forum/index.php?topic=7730.msg37970#msg37970
-
Started testing import of point clouds (.ply).
Seems to work fine, as does the mesh generation.
:)
-
How is one able to select/turn on GPUs in v1.4?
I found my dense cloud generation step took over four times as long in v1.4 compared to 1.3.4. Upon closer inspection I found that none of my GPUs show up in the Preferences dialog. They are not just unchecked; they do not show up at all. They did in the latest 1.3.4 release. Any fix for this?
Hello BobvdMeij,
Which OS are you using and which graphic card is installed on your system?
Hello Alexey,
I'm running 64-bit Windows and the Agisoft Photogrammetric Kit for TOPCON.
I also just learnt that rolling shutter compensation is not available when optimizing cameras. Hence I rolled back to 1.3.4, which is working like a charm so far.
-
Hello BobvdMeij,
I can suggest re-installing the graphic card drivers then, using the installer from the GPU manufacturer's web-site.
The rolling shutter option in 1.4 has been moved to the camera calibration group preferences in the Tools Menu -> Camera Calibration window.
-
Hello Alexey,
Build Tiled Model does not work from the imported points "dense cloud": Error: Zero dense cloud resolution
Best regards
-
Build Tiled Model does not work from the imported points "dense cloud": Error: Zero dense cloud resolution
Hi Diego!
Building a tiled model includes building a texture, which is made from images.
I guess you won't be able to create a tiled model without an aligned set of pictures.
Regards
-
Building a tiled model includes building a texture, which is made from images.
Thanks for your comment. Of course I have a set of aligned images; that is obvious. This error only happens with the new point import functionality.
-
Well, sometimes basic reminders are needed ;D
Does this problem happen before or after the parameters window (attached image)?
What is the initial pixel size value when the window appears?
-
Tiled model generation requires point normals to be present in the cloud.
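For clouds that lack normals, they can be estimated from each point's local neighbourhood before import. Real tools typically fit a plane to many neighbours via PCA; the sketch below shows only the geometric core of the idea (a normal from one point and two hypothetical neighbours), not PhotoScan's internal method:

```python
def estimate_normal(p, q, r):
    """Estimate a unit surface normal at point p from two of its
    neighbours q and r, via the cross product of the edge vectors.
    Rough illustration only; orientation depends on neighbour order."""
    u = (q[0] - p[0], q[1] - p[1], q[2] - p[2])
    v = (r[0] - p[0], r[1] - p[1], r[2] - p[2])
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / length for c in n)
```

For aerial LIDAR a common post-processing step is to flip any estimated normal so it points upwards (positive Z), which matches the 2.5D assumption mentioned later in this thread.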
-
Tiled model generation requires point normals to be present in the cloud.
Hello Alexey,
Perhaps the software could assume the normals point upwards for aerial LIDAR, something like 2.5D.
I generated normals for a point cloud in PLY format and it does not work.
Message with a *.las file without normals: Error: Missing point normals
Message with a *.ply file with normals: Error: Empty surface
Even with the normals it does not work: neither Build Mesh Arbitrary (3D) nor Build Tiled Model works for imported points.
Best regards,
-
The latest beta fails while exporting the orthomosaic if, after accepting the export dialog, the chosen format is JPEG instead of TIFF.
Cheers
-
Hi Alexey,
When I press Save it gives me this error?! What did I do wrong?
Please fix it.
-
The same ortho export with the same settings, exported with version 1.4, takes ages to render in a GIS (QGIS for instance), while the version 1.3.4 TIFF output renders instantaneously. Is this difference due to the generation of TIFF overviews?
Cheers
-
Hey dear team,
I need to know where you have put "Gradual Selection" in pre-release 1.4.0?
Thank you ;-)
-
Hello GeoGecco,
It should be in the Model menu (the menu related to the Model view functionality).
-
Hi Alexey,
When I press Save it gives me this error?! What did I do wrong?
Please fix it.
Please answer my question.
-
Hello tornado2011,
I think that something might have happened to the thumbnails archive in the project files structure. You can try the Remove Thumbnails option in the Workspace pane context menu for the active chunk, then wait until the thumbnails are reloaded in the Photos pane and try to save the project again.
-
Hi Alexey,
I have a question regarding the MicaSense RedEdge camera radiometric calibration.
I have loaded the photos, chosen to create a multispectral project, manually created the Calibration images folder, loaded the calibration panel images and masked them, but I am not sure that PhotoScan is actually using the calibration images.
I have read the EXIF data, and from MicaSense I got the reflectance information for the panel for every band.
I used that reflectance information as albedo in the Calibrate Sensitivity setting (perhaps I made a mistake here); after that the sensitivity is automatically set in Camera Calibration/Bands, and for every band it is above 1, while black levels are set to 0.
The results I am getting are bad. After entering the albedo, all images look overexposed and the model is white. :-\
Without radiometric calibration it creates a good model.
Any suggestions?
I, too, am trying to figure out how to use the calibration panel images and would love some help.
Is it possible to get a quick list of steps?
I added a set of 4 images of the calibration panel to an existing project generated with a Sequoia. They are automatically added to a "calibration images" folder in the cameras folder in the Workspace pane. I added them as a
"multispectral camera from files as bands".
I masked out everything except the panel in each image.
I ran "Calibrate Sensitivity" and can see in the Camera Calibration window, under the Bands tab, that the sensitivity for the green channel has changed and "normalize band sensitivity" has been checked, while the other bands remain the same (0 and 1 for black level and sensitivity). Doing anything further gives wonky results.
What should I be doing differently?
Thanks!
-
Hello school,
I'm attaching brief instructions.
Please note that you need to apply masks for each band of every calibration image before calibrating the sensitivity.
-
The latest beta will fail while exporting the Orthomosaic if, after accepting the export dialog, the chosen format is JPEG instead of TIF.
The same Ortho export, with the same settings, if exported with version 1.4 takes ages to render into a GIS (QGIS for instance) while the version 1.3.4 tif output renders instantaneously. Is this difference due to the generation of TIFF overviews?
Hello Renato,
Can you provide the processing log related to the failed export and specify the orthomosaic size? Also, is it a common RGB dataset or a multispectral project?
The logs related to the exports from 1.3 and 1.4 would also be helpful. Additionally, please specify the file size for both export results, and whether there are any problems with the QGIS import if you disable the TIFF overviews during export.
-
Classify Ground Points... brings up the Select by Color dialog.
(http://www.digital-mapping.net/forums/Photoscan/2017/classify_ground.jpg)
-
Alexey,
Thanks very much! I was very close but was missing just one thing:
Note: You need to apply masks for each calibration image and for each band! To switch between bands use “Set Master Channel” in the context menu after right-clicking on the chunk's label in the Workspace pane
Now I have sensitivity being adjusted for each band and am reprocessing.
Thanks!
-
Hello school,
In the next pre-release update we'll slightly revise the procedure; it should allow automatic detection of the MicaSense calibration panel.
-
Great. Thanks!
-
Hi Alexey
Concerning the export of the orthomosaic as a JPEG image, the problem might have been related to reusing a project processed in version 1.3.4. The same project processed from the ground up in version 1.4.0 has no problems: the image exports as a JPEG without issue. As for QGIS handling TIFF images exported from PhotoScan, TIFF overviews do matter. If they are not generated, the image is almost impossible to visualize in QGIS. If TIFF overviews are rendered, the image renders just fine and quickly, regardless of its size. For this reason, if the objective is to use the ortho in a GIS environment, it's recommended to activate the TIFF Overviews option while exporting.
Cheers
-
I see a new feature called "Draw Patch" but cannot find anything in the help file about it. Can anyone explain what it does and how it differs from Draw Polygon?
-
Hello Kjellis85,
Draw Patch is just a more convenient combination of Draw Polygon and Assign Image: it automatically assigns the most appropriate image to the drawn polygon.
-
Our company antivirus software crashed my project and also PhotoScan 1.4.0, which I downloaded last night.
It has done the same with other software, but I think previous versions of PhotoScan have not had any issues. I guess you should talk to the antivirus companies and get on their "white list".
-
Hey, can I get some more detail on what the "Use Strict Volumetric Masks" is doing with the new mesh workflow?
-
Hey, can I get some more detail on what the "Use Strict Volumetric Masks" is doing with the new mesh workflow?
Hello Smallpoly,
We'll post an introduction to this feature in the Blog section in a couple of days. Basically, the feature allows using only a few masked photos to cover the problematic/complex areas with the new mesh generation method, instead of masking all the images.
-
Our company antivirus software crashed my project and also PhotoScan 1.4.0, which I downloaded last night.
It has done the same with other software, but I think previous versions of PhotoScan have not had any issues. I guess you should talk to the antivirus companies and get on their "white list".
Hello Erik,
Does it happen only with this particular pre-release build, or have you observed it before with any 1.3.4 release versions?
Also, please specify which antivirus you are using.
-
Hello, about the Cesium 3D Tiles export: we are currently burdened with writing code for the base app, so it would be great if the contents of the created zip file were placed in one folder, with another folder for the HTML and other files that point to the tiles in the first folder. Thank you.
-
>Does it happen only with this particular pre-release build, or have you observed it before with any 1.3.4 release versions?
>Also, please specify which antivirus you are using.
No, I have never seen this before with PhotoScan, but it has happened with other programs.
The antivirus is OfficeScan from Trend Micro, as shown in the screenshot.
https://www.trendmicro.com/en_gb/business.html
Erik
-
Hello, Alexey:
Can you please elaborate a bit on what "incremental alignment support" means and how to use it?
Thanks
-
Hello JMR,
To use incremental image alignment it is necessary to keep the key points in the project (there's a new setting for that on the Advanced tab of the Preferences dialog). Once the option is set, all projects where image matching has been completed will also contain the key points.
So after running Align Photos on a certain set of images, it is possible to add new photos to the same chunk and run the Align Photos operation again (with the "Reset Current Alignment" option unchecked). In this case the processing won't start from scratch; instead PhotoScan will try to match the newly added images against the already aligned image set, based on the stored key points.
-
Hi guys,
I have tested your pre-release Agisoft PhotoScan version, which can export Cesium 3D Tiles from a point cloud, and have faced one problem.
After I imported the tiles into the Cesium viewer, my point cloud looks different: it looks like the majority of the points are gone, and those that are left don't carry the information the point cloud has in PhotoScan.
I have tried building medium and high quality point clouds, but the result was the same.
Do you have any suggestion how I can get correct output?
Best regards
Linas
-
While creating masks, an apparent bug causes some masks created in a previous session to disappear if one saves the project: if you open a photo that already had a mask, it seems not to be masked. Then if you exit and open again, you'll see it again. Some other weird behavior has been observed when using the save command/button while creating several orthos under the same chunk, requiring the program to be restarted to set things right.
I wonder if you can reproduce this error as well.
regards,
GEOBIT
-
Hello JMR,
Which pre-release build are you using? I wasn't able to reproduce the masking issue in either the PSX or the PSZ format.
-
I've sent a mail with detailed info.
Regards,
GEOBIT
-
Hello Alexey,
BuildOrthomosaic error in version 1.4.0 build 5310:
Error: TIFFReadTile: unexpected error
Logs:
2017-11-24 13:56:36 Estimating resolution...
2017-11-24 13:56:37 Finished processing in 0.938 sec (exit code 1)
2017-11-24 13:56:45 Checking for missing images...
2017-11-24 13:56:45 checking for missing images... done in 0.29 sec
2017-11-24 13:56:45 Finished processing in 0.29 sec (exit code 1)
2017-11-24 13:56:45 BuildOrthomosaic: projection = Planar, surface = DEM, blending mode = Mosaic, pixel size = 0.5 x 0.5
2017-11-24 13:56:45 Analyzing DEM...
2017-11-24 13:56:45 estimating tile boundaries... done in 0.04 sec
2017-11-24 13:56:45 generating 4270x6640 orthophoto (7 levels, 0 resolution)
2017-11-24 13:56:45 selected 81 cameras
2017-11-24 13:56:45 Orthorectifying images...
2017-11-24 13:56:45 Orthorectifying 81 images
2017-11-24 13:57:20 Finished orthorectification in 34.522 sec
2017-11-24 13:57:20 selected 4 tiles
2017-11-24 13:57:20 selected 4 tiles
2017-11-24 13:57:20 Updating partition...
2017-11-24 13:57:22 25 of 55 processed in 2.02 sec
2017-11-24 13:57:23 16 of 46 processed in 0.887 sec
2017-11-24 13:57:23 6 of 39 processed in 0.04 sec
2017-11-24 13:57:23 4 of 31 processed in 0.021 sec
2017-11-24 13:57:23 partition updated in 3.588 sec
2017-11-24 13:57:23 selected 5 tiles
2017-11-24 13:57:23 Updating orthomosaic...
2017-11-24 13:57:24 loaded partition in 0.178 sec
2017-11-24 13:57:24 boundaries extracted in 0.054 sec
2017-11-24 13:57:24 can't parse geo key directory
2017-11-24 13:57:24 region overflow: reading [-951, -914] - [-947, -911] from 11x5 image
2017-11-24 13:57:24 libtiff error: 4294966272: Col out of range, max 10
2017-11-24 13:57:24 can't parse geo key directory
2017-11-24 13:57:24 region overflow: reading [-952, -906] - [-947, -905] from 11x10 image
2017-11-24 13:57:24 libtiff error: 4294966272: Col out of range, max 10
2017-11-24 13:57:24 can't parse geo key directory
2017-11-24 13:57:24 region overflow: reading [-956, -914] - [-947, -909] from 11x9 image
2017-11-24 13:57:24 libtiff error: 4294966272: Col out of range, max 10
2017-11-24 13:57:24 can't parse geo key directory
2017-11-24 13:57:24 region overflow: reading [-956, -914] - [-947, -910] from 11x7 image
2017-11-24 13:57:24 libtiff error: 4294966272: Col out of range, max 10
2017-11-24 13:57:24 can't parse geo key directory
2017-11-24 13:57:24 region overflow: reading [-956, -908] - [-947, -905] from 11x12 image
2017-11-24 13:57:24 libtiff error: 4294966272: Col out of range, max 10
2017-11-24 13:57:24 can't parse geo key directory
2017-11-24 13:57:24 region overflow: reading [-956, -907] - [-947, -905] from 11x11 image
2017-11-24 13:57:24 libtiff error: 4294966272: Col out of range, max 10
2017-11-24 13:57:24 can't parse geo key directory
2017-11-24 13:57:24 region overflow: reading [-956, -910] - [-947, -905] from 11x11 image
2017-11-24 13:57:24 libtiff error: 4294966272: Col out of range, max 10
2017-11-24 13:57:24 can't parse geo key directory
2017-11-24 13:57:24 region overflow: reading [-956, -912] - [-948, -907] from 11x10 image
2017-11-24 13:57:24 libtiff error: 4294966272: Col out of range, max 10
2017-11-24 13:57:24 Finished processing in 38.927 sec (exit code 0)
2017-11-24 13:57:24 Error: TIFFReadTile: unexpected error
-
Hello Diego,
Can you please send any sample image from the set? If you are working with the multiplane camera approach, then send all the images for one camera instance.
-
Hello Diego,
Can you please send any sample image from the set? If you are working with the multiplane camera approach, then send all the images for one camera instance.
Hello Alexey,
I already sent example images to your mail, thanks.
Best regards,
-
Listed in the change log for version 1.4.0 build 5076:
• Added support for multiple dense clouds/models in a single chunk.
How does this work? I have tried creating multiple dense clouds in a single chunk and merging chunks with dense clouds; both result in a chunk with a single dense cloud.
-
Does the dense cloud classification still work? I'm not able to do it in auto mode in pre-release 1.4.
Any suggestions?
-
Does the dense cloud classification still work? I'm not able to do it in auto mode in pre-release 1.4.
Any suggestions?
The menu selection for this is messed up. See this post:
http://www.agisoft.com/forum/index.php?topic=7730.msg38082#msg38082
You can work around it by using the option in Batch Process...
-
Where can I find "Export Matches" in this new version?
Thanks,
-
Where can I find "Export Matches" in this new version?
Hello zView,
The Export Matches operation has been grouped with the Export Cameras option now, so you can use File Menu -> Export -> Export Cameras and select a file format (BINGO, PATB, ORIMA) that supports matching points export; then in the export dialog check the "Tie points" option.
-
Alexey, it seems that there is a bug there. The BINGO format for matches does not provide any image coordinates, only:
ITER itera.dat
IMCO image.dat
GEOI geoin.dat
END
-
Hello zView,
If you have checked the Tie Points option, then in image.dat you'll get the coordinates of the matching points on the images.
-
Thanks, it works.
-
I have tested your pre-release Agisoft PhotoScan version, which can export Cesium 3D Tiles from a point cloud, and have faced one problem.
After I imported the tiles into the Cesium viewer, my point cloud looks different: it looks like the majority of the points are gone, and those that are left don't carry the information the point cloud has in PhotoScan.
I have tried building medium and high quality point clouds, but the result was the same.
Do you have any suggestion how I can get correct output?
Hello Linas,
We have updated version 1.4.0 to build 5420; there were some improvements and fixes for Cesium Tiles. Can you please check whether the newly exported results fit your needs now?
-
In build 5420 it is possible to adjust the sensitivity of the 3D controller and also to invert its directions via the Navigation tab in the Preferences window. This tab can also be opened by pressing the Menu button on the controller itself.
Another important fix is related to dense cloud import: if the imported data lacks normals, PhotoScan will try to generate them automatically.
-
In build 5420 it is possible to adjust the sensitivity of the 3D controller and also to invert its directions via the Navigation tab in the Preferences window. This tab can also be opened by pressing the Menu button on the controller itself.
Another important fix is related to dense cloud import: if the imported data lacks normals, PhotoScan will try to generate them automatically.
Being able to invert the directions on the Spacemouse made a huge difference for my inverted way of thinking. Much better, thanks! :)
-
Hi,
I have an issue with the fiducials feature in the newest and second-newest builds. Everything works correctly except for markers, which don't work whatsoever.
If I have a point cloud and try to create a marker, it creates the marker millions of meters away. Creating markers in Photo view doesn't work either (neither Add nor New). With known camera coordinates it behaves the same. I've tried it on several different projects with different images.
I would love to have this fixed, as the feature makes it so much easier to work with old imagery!
-
I greatly look forward to when film with fiducials is part of a stable Agisoft release.
-
Hey, can I get some more detail on what the "Use Strict Volumetric Masks" is doing with the new mesh workflow?
Hello Smallpoly,
Please see the following tutorial; hopefully it is informative enough to convey the idea of strict volumetric masking:
http://www.agisoft.com/index.php?id=48
-
Hey, can I get some more detail on what the "Use Strict Volumetric Masks" is doing with the new mesh workflow?
Hello Smallpoly,
Please see the following tutorial; hopefully it is informative enough to convey the idea of strict volumetric masking:
http://www.agisoft.com/index.php?id=48
Cool! 8)
How about a tutorial for multiple dense clouds/models in a single chunk?
-
Please see the following tutorial...
Thanks for the tutorial, Alexey!
Will the two mesh construction modes always be offered as options (not talking about the strict volume mask, but the two meshing modes)?
It seems like they both have advantages and disadvantages, so I would vote for offering both.
I'd also add my voice to those who'd like to see the delighting functionality in the Standard edition of the software. Right now the price difference between Standard and Professional is huge (I guess because the latter is used by institutions and for all kinds of important things) and there is zero chance I'll ever buy it, but built-in delighting would be practical for use in games. Perhaps a slightly more expensive third option would make sense?
-
Hello.
Great new functionality in this pre-release!
I'm trying out the new point cloud import function. I have two LiDAR point clouds I would like to import, but I get the following errors.
2017-11-29 12:17:00 ImportPoints
2017-11-29 12:17:00 Finished processing in 0 sec (exit code 0)
2017-11-29 12:17:00 Error: Unsupported datum transformation
2017-11-29 12:17:16 ImportPoints
2017-11-29 12:17:16 Finished processing in 0 sec (exit code 0)
2017-11-29 12:17:16 Error: Unsupported datum transformation
2017-11-29 12:19:23 unknown LASitem type 7
2017-11-29 12:19:26 ImportPoints
2017-11-29 12:19:26 Importing point cloud...
2017-11-29 12:19:26 unknown LASitem type 7
2017-11-29 12:19:26 Finished processing in 0 sec (exit code 0)
2017-11-29 12:19:26 Error: Can't import point cloud: E:/Prosjekter/Training phoenix/20170423-225543_miniRanger_A7R2/terrasolid/Training_Final_WGS84_UTM11N_Meters.laz
>>>
Do I need to apply some coordinate settings in PhotoScan before importing?
-
Hello obw,
In which coordinate system is the current chunk referenced, and which one is used for the point cloud that is being imported?
-
"Show Images" in the Model view using the orthographic projection (5 on the keyboard) doesn't work well.
-
I'd like to try the new point cloud import feature.
How should I prepare my point clouds when using normals? Right now I'm trying to make an ASC file with normals and convert it to PTS. What should I use as the separator (tab, comma, semicolon, space)? What about the decimal character (point, comma)?
Now my PTS file looks like this (X, Y, Z, R, G, B, normals), with a space as separator:
4653.735 -227.037 3.138 97 79 59 0.075 0.045 0.996
I tried calculating the normals in PhotoScan, but it seems to take a very long time; it never finished.
-
Hello obw,
In which coordinate system is the current chunk referenced, and which one is used for the point cloud that is being imported?
Alexey,
In my last attempt I tried to set the chunk to the same reference setting as the scan (.laz file): WGS84 / UTM 11N.
These are the latest errors; it still did not work, see below and the attached images:
2017-11-29 15:08:04 unknown LASitem type 7
2017-11-29 15:08:33 ImportPoints
2017-11-29 15:08:33 Importing point cloud...
2017-11-29 15:08:33 unknown LASitem type 7
2017-11-29 15:08:33 Finished processing in 0 sec (exit code 0)
2017-11-29 15:08:33 Error: Can't import point cloud: E:/Prosjekter/Training phoenix/20170423-225543_miniRanger_A7R2/terrasolid/Training_Final_WGS84_UTM11N_Meters.laz
>>>
-
Move to Other Tab Group is not working in build 5432.
-
Move to Other Tab Group is not working in build 5432.
Hello bisenberger,
I cannot confirm it yet. Which OS are you using? And are there any other issues when the Photo view is opened?
-
Hello obw,
In which coordinate system is the current chunk referenced, and which one is used for the point cloud that is being imported?
In my last attempt I tried to set the chunk to the same reference setting as the scan (.laz file): WGS84 / UTM 11N.
These are the latest errors; it still did not work, see below and the attached images:
2017-11-29 15:08:04 unknown LASitem type 7
2017-11-29 15:08:33 ImportPoints
2017-11-29 15:08:33 Importing point cloud...
2017-11-29 15:08:33 unknown LASitem type 7
2017-11-29 15:08:33 Finished processing in 0 sec (exit code 0)
2017-11-29 15:08:33 Error: Can't import point cloud: E:/Prosjekter/Training phoenix/20170423-225543_miniRanger_A7R2/terrasolid/Training_Final_WGS84_UTM11N_Meters.laz
>>>
Hello obw,
It seems that for some reason the coordinate system is not recognized (and you cannot select the coordinate system manually). Can you send the problematic point cloud file to support@agisoft.com and also specify where it comes from?
-
How should I prepare my point clouds when using normals? Right now I'm trying to make an ASC file with normals and convert it to PTS. What should I use as the separator (tab, comma, semicolon, space)? What about the decimal character (point, comma)?
Now my PTS file looks like this (X, Y, Z, R, G, B, normals), with a space as separator:
4653.735 -227.037 3.138 97 79 59 0.075 0.045 0.996
The PTS format doesn't support point normals. You can consider using PLY (a plain-text format), which supports normals and can also be imported into PhotoScan.
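To convert PTS-style records like the line quoted above into ASCII PLY with normals, a minimal writer looks like this. The header follows the common PLY property naming (x/y/z, nx/ny/nz, red/green/blue); verify the expected property order against your target importer:

```python
def write_ply(path, points):
    """Write an ASCII PLY file with per-point normals and colors.
    Each point is a tuple (x, y, z, nx, ny, nz, r, g, b)."""
    header = [
        "ply",
        "format ascii 1.0",
        "element vertex %d" % len(points),
        "property float x", "property float y", "property float z",
        "property float nx", "property float ny", "property float nz",
        "property uchar red", "property uchar green", "property uchar blue",
        "end_header",
    ]
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for x, y, z, nx, ny, nz, r, g, b in points:
            # PLY ASCII data is space-separated with a point as the
            # decimal character, one vertex per line.
            f.write("%.3f %.3f %.3f %.3f %.3f %.3f %d %d %d\n"
                    % (x, y, z, nx, ny, nz, r, g, b))
```

Note that PLY always uses a space as the separator and a point as the decimal character, which also answers the separator/decimal question for this route.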
-
I noticed a bug in the new Refine Mesh tool, where some compressions, or numbers of bands, return an "Unsupported data type" error.
I used 8-bit B&W images with the color profile "Gray Gamma 2.2" and with what I think was ZIP compression (it doesn't say in the metadata). After converting them all to 8-bit 3-channel "Adobe RGB (1998)" with no compression, the tool worked fine. My suspicion is the ZIP compression, but I can't say so reliably since I'm not entirely sure what the compression was.
Either way, the tool works with that simple workaround!
-
Hello eriksh,
Currently in the pre-release, Refine Mesh uses only RGB image data, but for the official release we are planning to support different types of data. If you can provide a project with the camera alignment and a few source images (in the unsupported format), please send the data to support@agisoft.com. To reduce the file size you can remove the dense cloud and mesh from the project.
-
Move to Other Tab Group is not working in build 5432.
Hello bisenberger,
I cannot confirm it yet. Which OS are you using? And are there any other issues when the Photo view is opened?
I double-click an image in the Photos panel to open it in a tab. I right-click the image tab and choose Move to Other Tab Group. I click on the Model tab and double-click a different image; the image replaces the one in the other tab. I think I just figured out my issue: if I right-click on another image in the Photos panel there is an Open in New Tab option. :-[
Sorry Alexey
-
Crash with 200W (2 million) tie points when saving as Smart3D XML files.
I am running build 5432.
-
Crash with 200W (2 million) tie points when saving as Smart3D XML files.
I am running build 5432.
Hello ruyi7952,
If there's anything in the log related to the issues, can you please post it here or send to support@agisoft.com?
-
If there's anything in the log related to the issues, can you please post it here or send to support@agisoft.com?
Hello Alexey
I don't have any logs, but I sent a demo project to your mailbox.
Thank you.
-
Hi ,
First time I post something!
I want to import a point cloud from a 3D scan into a PhotoScan chunk, merge it with the point cloud from PhotoScan, and then mesh the merged point cloud.
I am wondering if it is possible?
The .las import is OK, but afterwards, if I try to mesh the imported point cloud I get an error: empty region, and if I try to merge the two point clouds together... nothing happens.
Any idea?
-
coordinate system?
Hi ,
First time I post something!
I want to import a point cloud from a 3D scan into a PhotoScan chunk, merge it with the point cloud from PhotoScan, and then mesh the merged point cloud.
I am wondering if it is possible?
The .las import is OK, but afterwards, if I try to mesh the imported point cloud I get an error: empty region, and if I try to merge the two point clouds together... nothing happens.
Any idea?
-
It looks good, I'm looking forward to the stable version.
I especially like that you have listened to the create polygon/polyline problem.
I hope an update will cure our very unstable build (it is suddenly spending HUGE amounts of time on processing).
-
I hope an update will cure our very unstable build (it is suddenly spending HUGE amounts of time on processing).
This can be solved using hot glue in any spare USB sockets and not letting students near computers.
http://www.agisoft.com/forum/index.php?topic=8100.0 (http://www.agisoft.com/forum/index.php?topic=8100.0)
-
Hi SvaneSDU,
Even though you are processing on an external hard drive, the real cause is that recent versions of PhotoScan no longer have a depth filtering limit. That's what makes dense cloud generation take much longer.
Just look at the previous answers in this topic:
Hello cadm8,
In the version 1.3.0 the number of pairs for the depth filtering has a strict threshold: 50 pairs, in the later updates the limit has been removed.
Hi, that seems like a logical explanation! Is there a way to adjust the limit in the newer versions, or do you think there should be a change regarding this?
You can use the following line input to the Console pane to limit the number of pairs:
PhotoScan.app.settings.setValue('main/depth_filtering_limit', N)
Here, instead of N, you need to input some integer value, for example 80. Hopefully it will reduce the processing time considerably without any visible issues. I do not recommend going under 50-60 pairs, though.
To return the setting to its default (unlimited) state, please use "-1":
PhotoScan.app.settings.setValue('main/depth_filtering_limit', -1)
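As a small convenience, the recommendation above can be wrapped in a guard so a script never sets the limit below the suggested floor; a sketch (the helper name is mine, not part of the API):

```python
def depth_filtering_value(n, floor=50):
    """Sanitize a value for the 'main/depth_filtering_limit' setting.

    Returns -1 (unlimited, the default) for None or negative input,
    otherwise clamps to the recommended floor of 50 pairs to avoid
    visible artifacts in the dense cloud.
    """
    if n is None or n < 0:
        return -1
    return max(n, floor)

# Then, in the PhotoScan Console pane:
# PhotoScan.app.settings.setValue('main/depth_filtering_limit',
#                                 depth_filtering_value(80))
```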
-
The two clouds are in the same local coordinate system (the scan point cloud was used to define GCPs in PhotoScan).
coordinate system?
Hi ,
First time I post something!
I want to import a point cloud from a 3D scan into a PhotoScan chunk, merge it with the point cloud from PhotoScan, and then mesh the merged point cloud.
I am wondering if it is possible?
The .las import is OK, but afterwards, if I try to mesh the imported point cloud I get an error: empty region, and if I try to merge the two point clouds together... nothing happens.
Any idea?
-
Hello geopole,
Currently, point cloud import in 1.4.0 discards the existing dense cloud in the chunk and loads the new one from the file.
However, we are considering a merge-on-import option for later 1.4 updates.
-
Hi all,
I'm new to Photoscan and using Pro version 1.4.0 build 5430 for processing multispectral Micasense Rededge imagery.
My question pertains to the change "Added support for MicaSense radiometric calibration parameters".
Following some other comments, I've found the useful "multiband-calibration" guide provided by Alexey that details how to calibrate sensitivity using in situ calibration panels. But does the new version also allow users to correct for the irradiance in each image collected simultaneously with a downwelling light sensor and present in the exif data? There is a workaround in R, but was hoping this function may be newly available in version 1.4.0.
Thanks.
-
Hello cdoughty,
I suggest updating to build 5432, as there were some fixes related to multispectral image processing.
The meta-information related to the irradiance sensor data should be taken into consideration during the orthomosaic generation process if the "Normalize Band Sensitivity" option is checked in the Tools Menu -> Camera Calibration dialog -> Bands tab (for each band). The option is checked automatically if the Reflectance Calibration procedure is completed.
Note that the latest build I am referring to can detect the images with the calibration panel and mask them out automatically, so you can go to the Tools Menu -> Calibrate Reflectance dialog right after adding source images and calibration images to the same chunk.
-
Is it possible to install this pre-release version alongside 1.3.*?
If so, how?
Edit: Going back through this thread, it seems you just have to rename your existing PhotoScan directory before installing.
-
Is it possible to install this pre-release version alongside 1.3.*?
Install the 1.3 version first, then move its folder to another location, then install 1.4 to the default location.
-
Hi all, version 5432 gets this error. How do I fix it?
2017-12-12 11:00:51 generating 101331 x 56272 raster in 1 x 1 tiles
2017-12-12 11:03:59 libtiff error: Maximum TIFF file size exceeded
2017-12-12 11:03:59 libtiff error: Maximum TIFF file size exceeded
2017-12-12 11:04:00 Finished processing in 188.516 sec (exit code 0)
2017-12-12 11:04:00 Error: TIFFWriteTile: unexpected error
Thank you.
-
Could you please explain what the "Refine Mesh" command is doing?
I.e., how exactly is it changing the mesh?
-
Hello obw,
In which coordinate system is the current chunk referenced, and which one is used for the point cloud being imported?
Alexey,
In my last attempt I tried to set the chunk to the same reference setting as the scan (.laz file): WGS84 UTM 11N.
These are the latest errors; it still did not work, see below and the attached images:
2017-11-29 15:08:04 unknown LASitem type 7
2017-11-29 15:08:33 ImportPoints
2017-11-29 15:08:33 Importing point cloud...
2017-11-29 15:08:33 unknown LASitem type 7
2017-11-29 15:08:33 Finished processing in 0 sec (exit code 0)
2017-11-29 15:08:33 Error: Can't import point cloud: E:/Prosjekter/Training phoenix/20170423-225543_miniRanger_A7R2/terrasolid/Training_Final_WGS84_UTM11N_Meters.laz
>>>
Hello obw,
It seems that for some reason the provided LAZ file doesn't contain the information regarding the coordinate system.
The fix will be included in the next 1.4.0 update: it will then be possible to select the coordinate system in the Import Points dialog (in case the chunk is georeferenced).
-
Hi all, version 5432 gets this error. How do I fix it?
2017-12-12 11:00:51 generating 101331 x 56272 raster in 1 x 1 tiles
2017-12-12 11:03:59 libtiff error: Maximum TIFF file size exceeded
2017-12-12 11:03:59 libtiff error: Maximum TIFF file size exceeded
2017-12-12 11:04:00 Finished processing in 188.516 sec (exit code 0)
2017-12-12 11:04:00 Error: TIFFWriteTile: unexpected error
Hello ruyi7952,
Can you provide the full log related to the operation and specify the format of the input data (number of channels and bit depth)?
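For context on the error itself: "Maximum TIFF file size exceeded" from libtiff usually means the output would exceed the 4 GiB limit of classic TIFF, which uses 32-bit file offsets (BigTIFF lifts this). A quick lower-bound estimate, assuming 8-bit RGBA output (4 bytes per pixel, uncompressed):

```python
def fits_classic_tiff(width, height, bytes_per_pixel):
    """Check whether the raw pixel data alone stays under the classic TIFF
    4 GiB (2**32 byte) offset limit. Pixel data is only a lower bound on
    file size, so failing this check guarantees the overflow."""
    return width * height * bytes_per_pixel < 2 ** 32

# The orthomosaic from the log above is 101331 x 56272 pixels:
# roughly 22.8 GB of raw RGBA data, far over the classic TIFF limit.
```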
-
Dear Alexey,
This issue never happened before: I made a project with several chunks (built sparse/dense), and when I placed a marker it took all the RAM I had (see snapshot) and then the software crashed. Is this normal? It never happened before.
Edit: it took most of my RAM when placing the first marker only, so I disabled "Refine marker based on image content" and now it works well.
Best
Mohammed
-
Hi Dmitry,
Release 1.4 looks great! The new "Import Points" feature has allowed me to test a workflow where I take a point cloud out of Photoscan into CloudCompare which provides great features for editing point clouds (I am working on underwater models which can get quite "noisy" and require some manual editing), and then import the clean point cloud back into PS in order to mesh and texture.
However, one particular model I am working on ended up with a chunk of points missing when I did the import, and this was the case for both PLY and LAS file formats.
Attached you can see the imported point cloud with the missing chunk (an empty piece in the middle), and also the cloud shown completely whole in CloudCompare.
I'm thinking this could be a software issue?
Thanks!
-
Hello jinjamu,
It seems that the normals for the points in the middle are inverted. Can you try using Invert Normals option in the Tools menu for the selected points?
-
Hello Mohammed,
Can you provide the information about the chunk's contents: number of dense cloud points and number of polygons of the mesh model?
-
Hello Mohammed,
Can you provide the information about the chunk's contents: number of dense cloud points and number of polygons of the mesh model?
Thanks Alexey for your answer.
The dense point cloud is approx. 2-3 million points, and about the same for the mesh. When I disable Refine Marker in the Advanced menu and open a photo and then place a marker, the software works fine, but when I enable this option and open a photo and then place a marker it takes all my RAM (it happens when I open a photo and then place the marker).
Look at the snapshot, it explains everything.
Mohammed
-
Hello jinjamu,
It seems that the normals for the points in the middle are inverted. Can you try using Invert Normals option in the Tools menu for the selected points?
Hi Alexey,
Indeed you are correct!
Is there an easy way to select the points that need inversion? Doing it manually is not so easy.
Is this something that will be corrected in future releases?
Regards
John
-
Dear Alexey,
Can you figure out a solution to that problem? I answered above about the issue.
Mohamed
-
Hello Mohamed,
We are checking this at the moment.
-
Hi Alexey,
Indeed you are correct!
Is there an easy way to select the points that need inversion? Doing it manually is not so easy.
Is this something that will be corrected in future releases?
Hello John,
In build 5543 you can select the points in the problematic area (no need for an accurate selection) while looking from the top, then in the Tools Menu select Dense Cloud -> Invert Point Normals and in the dialog check only the "Opposite normals" option. In this case only the incorrectly oriented normals will be inverted.
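Conceptually, an "opposite normals" test likely reduces to a sign check of each normal against a reference direction (the up axis, if selecting while looking from the top). A standalone sketch of that idea, not PhotoScan's actual implementation:

```python
def flip_opposite_normals(normals, reference=(0.0, 0.0, 1.0)):
    """Flip only the normals pointing away from the reference direction.

    normals: list of (nx, ny, nz) tuples. A normal is considered
    'opposite' when its dot product with the reference vector is negative;
    correctly oriented normals are returned unchanged.
    """
    fixed = []
    for n in normals:
        dot = sum(a * b for a, b in zip(n, reference))
        fixed.append(tuple(-c for c in n) if dot < 0 else n)
    return fixed
```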
-
For those who were using the threshold to limit the number of pairs for depth filtering: there is now also a possibility to limit the number of image pairs used for the depth maps calculation, and the filtering parameter has changed as well:
PhotoScan.app.settings.setValue('main/depth_max_neighbors', N)
PhotoScan.app.settings.setValue('main/dense_cloud_max_neighbors', N)
To reset the values to the default (unlimited) behavior, use "-1" instead of the integer N.
-
Alexey > are these thresholds exposed through the GUI?
-
Hello JRM,
Currently these parameters are not accessible directly from the GUI, only via the Console pane or by modifying the PhotoScan-related parameters in the system registry.
-
Hello Mohamed,
We are checking this at the moment.
Oky Thanks!
Mohamed
-
Hello,
I updated to the new pre-release version (build 5543) and got no result with this part of a script.
It was working very well before.
Any idea how to fix this issue?
Thanks in advance!
## Update View (frontal) with Hotkey "." ##
def Update_Frontview_Bbox():
    # Update Frontview and Bbox in accordance with the CRS
    chunk = PhotoScan.app.document.chunk
    T = chunk.transform.matrix
    viewpoint = PhotoScan.app.viewpoint
    cx = viewpoint.width
    cy = viewpoint.height
    region = chunk.region
    r_center = region.center
    r_rotate = region.rot
    r_size = region.size
    r_vert = list()
    for i in range(8):  # bounding box corners
        r_vert.append(PhotoScan.Vector([0.5 * r_size[0] * ((i & 2) - 1), r_size[1] * ((i & 1) - 0.5), 0.25 * r_size[2] * ((i & 4) - 2)]))
        r_vert[i] = r_center + r_rotate * r_vert[i]
    height = T.mulv(r_vert[1] - r_vert[0]).norm()
    width = T.mulv(r_vert[2] - r_vert[0]).norm()
    if width / cx > height / cy:
        scale = cx / width
    else:
        scale = cy / height
    PhotoScan.app.viewpoint.coo = T.mulp(chunk.region.center)
    PhotoScan.app.viewpoint.mag = scale
    ym = PhotoScan.Matrix([[1, 0, 0], [0, 0, -1], [0, 1, 0]])
    PhotoScan.app.viewpoint.rot = chunk.transform.rotation * r_rotate * ym
-
Hello Seboon,
Looks like a bug in the latest update; in the meantime, while we are fixing it, you can use the following code as a workaround:
## Update View (frontal) with Hotkey "." ##
def Update_Frontview_Bbox():
    # Update Frontview and Bbox in accordance with the CRS
    chunk = PhotoScan.app.document.chunk
    T = chunk.transform.matrix
    viewpoint = PhotoScan.app.viewpoint
    cx = viewpoint.width
    cy = viewpoint.height
    region = chunk.region
    r_center = region.center
    r_rotate = region.rot
    r_size = region.size
    r_vert = list()
    for i in range(8):  # bounding box corners
        r_vert.append(PhotoScan.Vector([0.5 * r_size[0] * ((i & 2) - 1), r_size[1] * ((i & 1) - 0.5), 0.25 * r_size[2] * ((i & 4) - 2)]))
        r_vert[i] = r_center + r_rotate * r_vert[i]
    height = T.mulv(r_vert[1] - r_vert[0]).norm()
    width = T.mulv(r_vert[2] - r_vert[0]).norm()
    if width / cx > height / cy:
        scale = cx / width
    else:
        scale = cy / height
    viewpoint.coo = T.mulp(chunk.region.center)
    viewpoint.mag = scale
    ym = PhotoScan.Matrix([[1, 0, 0], [0, 0, -1], [0, 1, 0]])
    viewpoint.rot = chunk.transform.rotation * r_rotate * ym
    PhotoScan.app.viewpoint = viewpoint
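For clarity, the fit logic shared by both scripts (scale so the bounding box fills the viewport along its tighter axis) can be isolated as a pure function:

```python
def fit_scale(region_width, region_height, view_width, view_height):
    """Magnification that fits a region (world units) into a viewport
    (pixels) while preserving the aspect ratio.

    Equivalent to the if/else in the script above: pick the axis that is
    proportionally larger, i.e. take the smaller of the two ratios.
    """
    return min(view_width / region_width, view_height / region_height)
```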
-
Hello Alexey,
Testing 5532 with Smart3D 4.4.7.68, I get the error message:
Failed to create coordinate transformation from SRS
Sir, please use "Local: non-georeferenced cartesian system" in the SRS tag!
As with the previous version, the imported data is tilted by 90 degrees.
Please fix.
Thank you.
-
Hello Mohamed,
We are checking this at the moment.
Oky Thanks!
Mohamed
Hi Alexey, about my problem: is it going to be fixed in the next release, or is it my PC's problem?
Best,
Mohamed
-
Hello Mohamed,
It seems that such behavior hasn't changed since the version 1.3.
Some fixes will be included in the next update, but I'm not sure it will be completely fixed: currently the memory required for the Refine operation depends on the number of related images, i.e. the overlap ratio for the area of interest.
-
Hello Mohamed,
It seems that such behavior hasn't changed since the version 1.3.
Some fixes will be included in the next update, but I'm not sure it will be completely fixed: currently the memory required for the Refine operation depends on the number of related images, i.e. the overlap ratio for the area of interest.
Hi Alexey
Thanks for your answer.
Okay, I understand; I just wanted to let you know about this problem.
By the way, even after I remove the marker (with the refine operation enabled) the RAM is not released, so I have to close and reopen the software to get back to normal.
Thank you,
Mohamed
-
Hello Mohamed,
By the way, even after I remove the marker (with the refine operation enabled) the RAM is not released, so I have to close and reopen the software to get back to normal.
This will be fixed in the next update. Only a few photos will remain in the cache, if needed for further refinement.
-
Hi,
Can I have the details of the change in the Python API for buildDenseCloud()?
I found that we now have to use buildDepthMaps() first, but I can't figure out all the possible arguments.
Thanks
-
Hello BenjaminG,
The dense cloud generation stage has been split into two parts in the 1.4.0 API:
chunk.buildDepthMaps(quality = PhotoScan.LowQuality, filter = PhotoScan.AggressiveFiltering)
chunk.buildDenseCloud(point_colors = True)
The cameras argument of the depth maps calculation function is optional and defines for which cameras the depth should be estimated.
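For scripts that must run under both the 1.3.x and 1.4.0 APIs, one option is to branch on the presence of buildDepthMaps(); a sketch under that assumption (pass the real chunk from PhotoScan.app.document):

```python
def build_dense(chunk, quality, filt):
    """Run dense cloud generation across PhotoScan API versions.

    1.4.0 split the step into buildDepthMaps() + buildDenseCloud();
    1.3.x took quality/filter directly on buildDenseCloud().
    """
    if hasattr(chunk, "buildDepthMaps"):  # 1.4.0 and later
        chunk.buildDepthMaps(quality=quality, filter=filt)
        chunk.buildDenseCloud(point_colors=True)
    else:  # 1.3.x
        chunk.buildDenseCloud(quality=quality, filter=filt)
```

This avoids the "'quality' is an invalid keyword argument" TypeError reported below when an old script meets the new API.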
-
1.4.0 API (chunk.buildDepthMaps)
Hi Agisoft,
Please also document significant changes in the Python API reference, otherwise you get something like
Agisoft PhotoScan Professional Version: 1.4.0 build 5532 (64 bit)
Platform: Mac OS
...
self.chunk.buildDenseCloud(quality=DCQuality,filter=DCFilter)
TypeError: 'quality' is an invalid keyword argument for this function
Error: 'quality' is an invalid keyword argument for this function
...
and you have no idea why (this seems to be new since 1.4.0 build 5432)...
Thanks, Ruedi
-
Tab order in the 'Build Orthomosaic' dialog is very mixed up.
ps pro v1.4 64bit build 5532
-
The images are almost completely featureless white painted surfaces so the best way to check overlap used to be to visually identify tiny defects in the paint finish and flip to the next image and back quickly multiple times to see if they exist in adjacent images or not.
This is impossible to do now that the image disappears, reappears pixelated, and then takes a couple of seconds to load in full each time you switch.
FYI I have found that opening the relevant images into multiple tabs pretty much solves this for me.
-
When exporting a sparse point cloud as ply, with Point colors selected, the RGB values are all 0
1.4.0 build 5532 macOS
Tom
-
Hello,
It seems that with the latest release, in Model view, Filter Photos by Point is no longer available (greyed out) in the context menu (right-click); see the screenshot.
-
When exporting a sparse point cloud as ply, with Point colors selected, the RGB values are all 0
Hello Tom,
Thank you for reporting.
Seems like all the colors were affected in the latest update, we'll fix it in the next version.
-
it seems with latest release, in Model View, the Filter Photos by Point is no longer available (greyed out) in context menu (right click).. see screen shot
Hello Paul,
Thank you for reporting. It would be also fixed in the next update.
-
Can you tell me how I can build a mesh faster with the new volumetric masks? The whole process is very long. Does it use the processor or the video card? What hardware do you recommend so this new feature works faster? Very often I get an error with the message "Not enough memory".
-
Can you tell me how I can build a mesh faster with the new volumetric masks? The whole process is very long. Does it use the processor or the video card? What hardware do you recommend so this new feature works faster? Very often I get an error with the message "Not enough memory".
Can you please provide the processing log related to the Build Mesh operation and specify the number of images in the project, their resolution, number of masks applied and system specs (CPU, GPU, RAM, OS version)?
-
Can you tell me how I can build a mesh faster with the new volumetric masks? The whole process is very long. Does it use the processor or the video card? What hardware do you recommend so this new feature works faster? Very often I get an error with the message "Not enough memory".
Can you please provide the processing log related to the Build Mesh operation and specify the number of images in the project, their resolution, number of masks applied and system specs (CPU, GPU, RAM, OS version)?
I have 160 photos (3280x2464) and 7 masks, building the mesh with the Very High parameter, and it has been building for more than 15 hours. RAM is at maximum. Video card: GTX 750. Windows Server 2012. Screenshots attached.
-
And one more thing: when I use chunk.buildPoints() there is no effect. No points at all...
-
it seems with latest release, in Model View, the Filter Photos by Point is no longer available (greyed out) in context menu (right click).. see screen shot
When exporting a sparse point cloud as ply, with Point colors selected, the RGB values are all 0
Tab order in the 'Build Orthomosaic' dialog is very mixed up.
These should all be fixed in version 1.4.0 build 5585.
-
And one more thing: when I use chunk.buildPoints() there is no effect. No points at all...
The buildPoints() function assumes that you have imported exterior and interior orientation parameters for the cameras and have run the matchPhotos() operation as well.
-
I have 160 photos (3280x2464) and 7 masks, building the mesh with the Very High parameter, and it has been building for more than 15 hours. RAM is at maximum. Video card: GTX 750. Windows Server 2012. Screenshots attached.
More masks mean the process will take longer and require more memory. Have you tried running the process without strict masks to see how long it takes and what the peak memory consumption is?
-
Alexey,
it seems a lot of people have issues with the 1.3.0 dense cloud process. Many of us (I think) do not process drone images. So why not make "depth_filtering_limit" a parameter in the dense cloud dialog window?
That way, those who need the unlimited setting could keep it, those (like me) fine with a low value of 50 or 80 could enter it right there, and the eternal complaining about slow dense cloud building would cease. ;)
-
And one more thing: when I use chunk.buildPoints() there is no effect. No points at all...
The buildPoints() function assumes that you have imported exterior and interior orientation parameters for the cameras and have run the matchPhotos() operation as well.
I know; in version 1.3.4 it all worked fine. But here there are no points at all after buildPoints(). This is my code, which works in version 1.3.4:
chunk.importCameras(locationpath, format=PhotoScan.CamerasFormatXML)
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy, generic_preselection=True, filter_mask=True, keypoint_limit=100000, tiepoint_limit=4000)
chunk.buildPoints()
-
Hi
Merge tie points is not yet included among the Merge Chunks options in the Batch process.
Thanks
Mohamed
-
Export Cameras in file exchange format is being cut off at exactly 2 GB. Is there a way to fix it?
-
I know; in version 1.3.4 it all worked fine. But here there are no points at all after buildPoints(). This is my code, which works in version 1.3.4:
chunk.importCameras(locationpath, format=PhotoScan.CamerasFormatXML)
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy, generic_preselection=True, filter_mask=True, keypoint_limit=100000, tiepoint_limit=4000)
chunk.buildPoints()
I have checked it in the latest pre-release and it worked as expected. Can you please make sure that the cameras are properly imported (including the distortion information)?
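A quick pre-flight check before calling buildPoints() can surface missing orientation data; sketched against the camera.transform and sensor.calibration attributes from the Python API (verify the attribute names against your API version):

```python
def cameras_ready(chunk):
    """True when every camera has exterior orientation (transform) and
    interior orientation (sensor calibration), both of which
    buildPoints() relies on after importCameras() + matchPhotos()."""
    return all(cam.transform is not None
               and cam.sensor is not None
               and cam.sensor.calibration is not None
               for cam in chunk.cameras)
```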
-
Hello John,
In the build 5543 you can select the points that includes the problematic area (no need to have an accurate selection) while looking from top, then in the Tools Menu select Dense Cloud -> Invert Point Normals option and in the related dialog check on "Opposite normals" option only. In this case only incorrectly oriented normals will be inverted.
Works perfectly! Thanks very much.
-
The images are almost completely featureless white painted surfaces so the best way to check overlap used to be to visually identify tiny defects in the paint finish and flip to the next image and back quickly multiple times to see if they exist in adjacent images or not.
This is impossible to do now the image disappears, reappears pixelated, and then takes a couple of seconds to load in full each time you switch.
FYI I have found that opening the relevant images into multiple tabs pretty much solves this for me.
Hello James,
Have you tried to switch off multithread image loading in the Advanced Preferences tab? I think it should re-create the old behavior of the image loading for you.
-
Hello,
With the last update available, the option to hide items when capturing an image from the Model view seems to have disappeared.
It also doesn't work via API scripting.
Thanks if you can fix that in the next release.
-
With the last update available, the option to hide items when capturing image from model view seems to have disappeared.
With API scripting, it also doesn't work.
Hello Seboon,
We'll fix the Python function in the next 1.4.0 update.
-
I believe I'm experiencing some kind of stability problem with v1.4 Build 5532 Standard Edition.
The symptoms are that a very large number of Handles (16M and 31M on two separate runs) are created over time and my machine pretty much comes to a halt. By coming to a halt I mean unresponsive to mouse or keyboard.
Only one of the 36 available cores (core 2) is doing anything, it is at 100%. all other cores are essentially idle.
It's an Intel i9 with 128Gb memory and twin EVGA GTX 1080 Ti SC Gaming 11 GPUs.
This happens many hours (10+) into the photo alignment stage of processing a 7200+ camera model.
On one occasion while running the above mentioned big model I got impatient and started up a 2nd copy of Photoscan to build a mesh using the new algorithm on a model that I had originally aligned using version 1.3.4. It was a smaller model (3200 cameras). I noted during this run that again only core 2 was at 100%. It got to a stage with 16,000,000+ handles that neither copy of Photoscan seem to be making any progress. I tried pausing one then the other but neither paused or made any progress (no output to the console). When the copy of Photoscan that had been running for 12+ hours was killed the handles reduced to 15,000 or so proving it was the long running copy of Photoscan that appears to be leaking Handles.
PS: Memory usage is fairly low (<25% of 128GB).
HTH
-
Hello bjb,
Do you have the processing log related to any of these operations? Which parameters you are using for the processing?
-
log file is too big.
-
log file is too big.
Maybe you can provide its final part that is related to the problematic operation only?
-
log file is too big.
Maybe you can provide its final part that is related to the problematic operation only?
Without documentation of the console log, how can I know which operation is problematic?
The log file from the start of the process is 17 MB, so it's hard to see how to cut it down to 512 KB.
-
Please see attached screen shot.
The Building Model stage is only using one core.
-
Please see attached screen shot.
The Building Model stage is only using one core.
15 hours later.
Is it hung? One core is 100%
-
Hello bjb,
Some steps of the Build Mesh stage use only one core, so it is not a bug; however, the long processing time may be related to the large number of photos in the set and the model complexity.
-
Hello bjb,
Some steps of the Build Mesh stage use only one core, so it is not a bug; however, the long processing time may be related to the large number of photos in the set and the model complexity.
Thanks Alexey
I'll wait a while longer then.
While not a bug, it would be good if there were some indication that progress is being made; otherwise it just looks like it has hung.
-
log file is too big.
Maybe you can provide its final part that is related to the problematic operation only?
Without documentation of the console log, how can I know which operation is problematic?
The log file from the start of the process is 17 MB, so it's hard to see how to cut it down to 512 KB.
I've cut out the repetitive bits. Hopefully I've left in the bits that can help.
-
Error: bad allocation
When exporting large files and when generating dense point clouds, the program reports this error.
Exporting:
When exporting 600W (6 million) tie points, the program only uses 8 GB of my 128 GB of memory and then gives this error. The previous version did not have this problem.
In the previous version, exporting large files on a low-memory machine caused a memory error, and larger files (more than 630W tie points) could not be exported.
Dense point clouds:
With 1400W (14 million) tie points, building the dense point cloud with quality = Medium, depth filtering = Aggressive, using network processing and 16 GB of memory per computer, I get the error.
-
I believe I'm experiencing some kind of stability problem with v1.4 Build 5532 Standard Edition.
The symptoms are that a very large number of Handles (16M and 31M on two separate runs) are created over time and my machine pretty much comes to a halt. By coming to a halt I mean unresponsive to mouse or keyboard.
Only one of the 36 available cores (core 2) is doing anything, it is at 100%. all other cores are essentially idle.
It's an Intel i9 with 128Gb memory and twin EVGA GTX 1080 Ti SC Gaming 11 GPUs.
This happens many hours (10+) into the photo alignment stage of processing a 7200+ camera model.
On one occasion while running the above mentioned big model I got impatient and started up a 2nd copy of Photoscan to build a mesh using the new algorithm on a model that I had originally aligned using version 1.3.4. It was a smaller model (3200 cameras). I noted during this run that again only core 2 was at 100%. It got to a stage with 16,000,000+ handles that neither copy of Photoscan seem to be making any progress. I tried pausing one then the other but neither paused or made any progress (no output to the console). When the copy of Photoscan that had been running for 12+ hours was killed the handles reduced to 15,000 or so proving it was the long running copy of Photoscan that appears to be leaking Handles.
PS: Memory usage is fairly low (<25% of 128GB).
HTH
Hi Alexey
In the above quoted post I mentioned a problem with a large number of handles. I've repeated my test and found the source of the Handles leak to NOT be Photoscan. I hope that has not caused any confusion or unnecessary work on your part.
-
Hello bjb,
Some steps of the Build Mesh stage use only one core, so it is not a bug; however, the long processing time may be related to the large number of photos in the set and the model complexity.
Thanks Alexey
I'll wait a while longer then.
While not a bug, it would be good if there were some indication that progress is being made; otherwise it just looks like it has hung.
It's been 42+ hours since any progress was noted. I just tried to "Pause" and, as you can see in the attached screen capture, PhotoScan has not paused. It looks pretty hung to me.
-
Is it possible (and if yes, how) to have different versions of PhotoScan installed on the same PC, running independently of one another?
-
Is it possible (and if yes, how) to have different versions of PhotoScan installed on the same PC, running independently of one another?
Yes, you can, by installing it on another drive or in another folder.
-
I believe I'm experiencing some kind of stability problem with v1.4 Build 5532 Standard Edition.
The symptoms are that a very large number of Handles (16M and 31M on two separate runs) are created over time and my machine pretty much comes to a halt. By coming to a halt I mean unresponsive to mouse or keyboard.
Only one of the 36 available cores (core 2) is doing anything, it is at 100%. all other cores are essentially idle.
It's an Intel i9 with 128Gb memory and twin EVGA GTX 1080 Ti SC Gaming 11 GPUs.
This happens many hours (10+) into the photo alignment stage of processing a 7200+ camera model.
On one occasion while running the above mentioned big model I got impatient and started up a 2nd copy of Photoscan to build a mesh using the new algorithm on a model that I had originally aligned using version 1.3.4. It was a smaller model (3200 cameras). I noted during this run that again only core 2 was at 100%. It got to a stage with 16,000,000+ handles that neither copy of Photoscan seem to be making any progress. I tried pausing one then the other but neither paused or made any progress (no output to the console). When the copy of Photoscan that had been running for 12+ hours was killed the handles reduced to 15,000 or so proving it was the long running copy of Photoscan that appears to be leaking Handles.
PS: Memory usage is fairly low (<25% of 128GB).
HTH
Hi Alexey
In the post quoted above I mentioned a problem with a large number of handles. I've repeated my test and found that the source of the handle leak was NOT PhotoScan. I hope that has not caused any confusion or unnecessary work on your part.
It turns out there is a scenario where PhotoScan appears to have a handle leak after all. See the attached screen grab showing PhotoScan holding 14,520,358 handles while 65% (11:07 hours) of the way through the Align Photos stage. I've upgraded today to build 5650.
-
I believe I'm experiencing some kind of stability problem with v1.4 Build 5532 Standard Edition. ... See attached screen grab showing Photoscan holding 14,520,358 Handles while 65% (11:07 hours) of the way through Align Photos stage. I've upgraded today to build 5650.
Progress has stopped. See the attached screenshot for the Out of Memory error. Handles had reached 16,711,680 and only 2 GB of memory was in use.
-
Hello obw,
In which coordinate system is the current chunk referenced, and which one is used for the point cloud that is being imported?
Alexey,
In my last attempt I tried to set the chunk to the same reference setting as the scan (.laz file): WGS84 / UTM zone 11N.
These are the latest errors; it still did not work. See below and the attached images:
2017-11-29 15:08:04 unknown LASitem type 7
2017-11-29 15:08:33 ImportPoints
2017-11-29 15:08:33 Importing point cloud...
2017-11-29 15:08:33 unknown LASitem type 7
2017-11-29 15:08:33 Finished processing in 0 sec (exit code 0)
2017-11-29 15:08:33 Error: Can't import point cloud: E:/Prosjekter/Training phoenix/20170423-225543_miniRanger_A7R2/terrasolid/Training_Final_WGS84_UTM11N_Meters.laz
>>>
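As a side note on the "unknown LASitem type 7" message in the log above: it suggests the LAZ reader bundled with this build doesn't recognize one of the compressed record item types in the file, which is typically seen with the newer LAS 1.4 point record formats. One quick diagnostic is to read the version and point data record format directly from the file's public header. This is a sketch based on the LAS specification, not PhotoScan's actual reader:

```python
def las_point_format(header):
    """Parse (version, point data record format) from the first 105 bytes
    of a LAS/LAZ public header block."""
    if len(header) < 105 or header[:4] != b"LASF":
        raise ValueError("not a LAS/LAZ public header")
    version = (header[24], header[25])  # version major, minor
    # Byte 104 holds the point data record format; LAZ writers set the
    # high bit(s) of this byte as a compression flag, so mask them off.
    fmt = header[104] & 0x3F
    return version, fmt
```

Usage would be something like `las_point_format(open("file.laz", "rb").read(105))`. Formats 6 and above were introduced with LAS 1.4; re-exporting the cloud in an older point format (e.g. 2 or 3) from TerraSolid may work around the import error.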
Hello obw,
The fix will be included in the next 1.4.0 update.
It seems that for some reason the provided LAZ file doesn't contain the information regarding the coordinate system.
It will then be possible to select the coordinate system in the Import Points dialog (in case the chunk is georeferenced).
Hello Alexey.
I just tried the newest build 5650 and I finally managed to import the LAZ file that I provided earlier. However, it was not possible to select the coordinate system on import (nothing can be selected in the import coordinate dialog), and it is not possible to set the bounding box and proceed with any workflow.
When trying to build a mesh (source: dense cloud) I get the empty region warning. I tried to reset the region without luck, and tried to zoom out and find the region, but couldn't find it. Do you have any suggestions?
EDIT: Found the bounding box and was able to process the data. It was difficult to find, though; it would be good if it were easier to reset the box when you import a point cloud. I still can't choose a coordinate system when importing the cloud.
-
Hi Alexey,
I've noticed that there is a new SHOW IMAGES icon in the main tool bar. According to the manual, it 'Shows or hides stereographic image overlay'. What is it useful for and when would you use it?
Regards,
SAV
-
Hi Alexey,
With regards to parrot sequoia
1) The sensor registers both IMU angles and irradiance
It seems that this data is stored in proprietary .dat files on the sensor; the camera EXIF data doesn't store these values.
It would be a big improvement to be able to access the information stored in the .dat file for processing in Agisoft. Will this feature ever be accessible in Agisoft?
Thanks
-
Dear Alexey Pasumansky,
first of all, thanks for your nice brief instructions for the multiband calibration.
I did everything as you said, but regrettably every time (and I tried it 30 times) I got completely wrong values for the "black level" and the "sensitivity".
I checked everything; all panels are automatically recognized, in every band, but every time I only get black images as the outcome. Do you know a possible reason why? Maybe some of the others can help me?
-
Hello Payne,
Which build of PhotoScan Pro are you using? The reflectance calibration has changed in the latest 1.4.0 version compared to some old pre-release builds.
-
Thanks for your fast reply! I'm using the completely new version 1.4.0 build 5650. I also used the function to detect the images with the calibration panel and mask them out automatically, and it works, as you can see in my attached images, but regrettably with wrong outputs in the Camera Calibration Bands tab.
-
Hello Payne,
Can you please confirm that you are processing the data from scratch in build 5650 and are not using the old project saved in the previous builds?
-
Hello Alexey,
yes, I can confirm that. I'm not using the old project.
-
Hello Payne,
For me the sensitivity values in the Camera Calibration dialog do not seem correct.
Can you please run a quick test in version 1.4.0 (build 5650): create a new project, add a few images from the RedEdge camera, including the calibration images, and try to repeat the reflectance calibration operation. Then provide screenshots of the Bands tab from the Camera Calibration dialog before and after the reflectance calibration operation.
-
Hello Alexey,
First of all thanks for your stamina!
Yes, I'm also pretty sure that these values are not correct.
I did what you asked for; attached you can find the examples. Regrettably, the values don't change at all.
But the "Normalize Band Sensitivity" option is checked, and this is normally only the case once the reflectance calibration procedure is complete, right?
I have tested the same with Parrot Sequoia data and it shows the same issue.
Moreover, I wrote a short tutorial for the multiband calibration in 1.4.0 (build 5650) (see attached). Is there something wrong with it?
Best wishes,
Norman
-
Hello Payne,
"Normalize band sensitivity" option is checked on automatically after the reflectance calibration procedure is finished.
Can you send any five images related to the same camera position to support@agisoft.com for additional investigation? Of course, only if those five images also show the same sensitivity value in the calibration dialog.
-
I'm using 1.4 build 5097 and trying to import video (both MP4 and MOV) but keep getting the error: "A media resource couldn't be resolved".
I did not see any firm replies on the video importing issues, but on Windows 64-bit (specifically in my case), using VLC media player to convert to the standard .asf format works well for me.
Hope that helps. I know it is not one of the known formats shown in the import window, but it does show and play in preview and imports just fine. One thing to check is to make sure you have no frames that are solid black; delete those. Most are likely at the beginning of the footage, so check some of the first photos.
-
Hello jdesignz023,
The issues with video import are usually resolved by installing a codec pack (like the K-Lite codec pack or similar).
-
Hello Alexey,
also after the newest update, version 1.4.1 build 5925 (26 February 2018), the images turn black after "Calibrate Reflectance", and the products, such as the 3D point cloud as well as the orthomosaic, are only black too.
I sent the images to you; you can see it for yourself if you want.
Best wishes,
Norman
-
Hello Norman,
You can check the latest instruction for the reflectance calibration in the following tutorial:
http://www.agisoft.com/pdf/PS_1.4_(IL)_Refelctance_Calibration.pdf
If there are still any problems, please send a small sample dataset to support@agisoft.com for testing.
-
Hi Payne and Alexey,
I have been dealing with a similar problem (black or white images after reflectance calibration). Alexey may be aware of this because I've had a lengthy email chain with Agisoft support. I have just solved my problem, so maybe my solution will help you (PhotoScan 1.4.1 build 5925):
My precise workflow for successful calibration is as follows:
1. Add Photos
2. Set Tools -> Camera Calibration -> Bands -> "Normalize Band Sensitivity" to TRUE (check the box).
3. Calibrate Reflectance (with sun sensor & panel).
If I instead do the following, one of my bands becomes pure white (bad result).
1. Add Photos
2. Calibrate Reflectance (with sun sensor & panel). (Normalize band sensitivity is now automatically checked.)
Hopefully this solves your problem, too!
-
Hello William,
Thank you for posting. It seems that in the latest version it's really necessary to set the Normalize Band Sensitivity flag before starting the reflectance calibration procedure (another workaround is to just run the calibration procedure twice in a row, since the first time the flag will be set automatically). We'll fix that in the 1.4.2 update, so that it works as expected.
-
I think maybe I found a couple of bugs:
1) I tried to import a manually created markers file exported from PhotoScan Pro 1.3.x into PhotoScan Pro 1.4 Beta. I got a pop-up error dialog saying "Missing Sensor ID".
2) I tried to import an MP4 video file and got a pop-up dialog saying "A media resource couldn't be resolved".
Thank you for reporting.
We'll fix the XML compatibility for markers in the next update. As for the video import, please specify the OS you are using and the codec used in the video.
I'm also having the issue with video. The OS is Windows 10; the codec is H264 - MPEG-4 AVC (part 10) (avc1).
-
Hello ncassab,
On Windows it is usually required to install a codec (for example, use K-lite codec pack) to solve the problem.
-
... It seems that in the latest version it's really necessary to set the Normalize Band Sensitivity flag before starting the reflectance calibration procedure (another workaround is to just run the calibration procedure twice in a row, since the first time the flag will be set automatically). We'll fix that in the 1.4.2 update, so that it works as expected.
Hello Alexey,
First: no matter what I do, when using Agisoft PhotoScan 1.4.1 to mosaic Sequoia data, the orthomosaics always remain in DN (16-bit) values instead of reflectances. Setting the Normalize Band Sensitivity flag before the reflectance calibration procedure also led to no success. Is it even possible with the latest updated version, or will it only work with the new release 1.4.2?
Second: I have had no success opening the sequoia_param.dat file using Notepad++ on Windows. Any solution, e.g. another editor?
Thank you for supporting!
-
Hello Sören,
The output of reflectance calibration is in 16 bit format according to MicaSense recommendations:
https://support.micasense.com/hc/en-us/articles/215460518-What-are-the-units-of-the-Atlas-GeoTIFF-output-
A value of 65535 means 200% reflectance and 32768 means 100% reflectance, so if you need to get values from 0 to 1, it's necessary to use the Raster Calculator dialog, create several output channels (according to the number of input channels), use the ratio equation for each band, and then during export use the Index Value option in the Raster Transform section.
As for the problems that you are experiencing, can you provide a small sample image set with calibration plate images and the project file to support@agisoft.com?
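The conversion described above can be sketched per pixel value. This is only an illustration of the ratio equation, not the Raster Calculator itself; the divisor of 32768 follows the MicaSense convention quoted in this post:

```python
def dn_to_reflectance(dn):
    """Convert a calibrated 16-bit DN to a reflectance factor (0..2).

    Per the MicaSense convention: 32768 = 100 % reflectance,
    65535 = 200 % reflectance.
    """
    if not 0 <= dn <= 65535:
        raise ValueError("expected a 16-bit value")
    return dn / 32768.0
```

For example, a pixel value of 16384 would correspond to roughly 50 % reflectance.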
-
Hello Alexey,
I am trying to align photos using the following parameters:
WGS84 / UTM zone 28N and the EGM2008 geoid.
I am using reference preselection. The coordinates of the photo centers were calculated using PPK.
I tried first with version 1.4 and then with 1.3.5.
I have the same problem with other flights: disproportionate errors using version 1.4.
What am I doing wrong in the 1.4 configuration?
Thank you
-
Hello topo31,
Were you using the same workflow in both cases? When were the markers introduced: before or after the initial alignment?
Have you performed the optimization in any or both of the projects?
-
Hello Alexey
The steps I follow are:
1. Add photos.
2. Convert the WGS84 (EPSG 4326) / ellipsoidal heights of the photo centers to WGS84 / UTM 28N and EGM2008 heights.
3. Align photos: reference preselection, low accuracy, key point limit 10,000 / tie point limit 2,500, adaptive camera model fitting.
4. Insert markers.
5. Align photos: reference preselection, low accuracy, key point limit 10,000 / tie point limit 2,500, adaptive camera model fitting.
It is at this point that the errors appear in version 1.4 but not in 1.3.5.
The models are not optimized.
Thank you
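As a side note on step 2 of the workflow above, the UTM zone number follows from longitude alone. A minimal sketch of the standard 6-degree rule (it ignores the Norway and Svalbard exceptions; zone 28N covers 18°W to 12°W):

```python
def utm_zone(lon, lat):
    """Return the UTM zone number and hemisphere letter for a point.

    Simplified: applies the plain 6-degree rule without the
    Norway/Svalbard special cases.
    """
    if not -180 <= lon < 180:
        raise ValueError("longitude out of range")
    zone = int((lon + 180) // 6) + 1
    return zone, "N" if lat >= 0 else "S"
```

For instance, a point at 15.5°W, 28°N falls in zone 28N, matching the CRS used in this thread.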
-
Hello topo31,
First of all, there's no point in aligning the dataset twice. Instead of step 5, I suggest using the Optimize Cameras option.
However, in version 1.4 at step 5, are you using the Reset Current Alignment option in the Align Photos dialog? And is it possible to provide the report files from both projects (processed in versions 1.3 and 1.4)?
-
Hello Alexei,
I have aligned again using high accuracy / key point limit 40,000 / tie point limit 10,000.
The results are similar in versions 1.4 and 1.3.5; there are only differences at low accuracy.
Until today, I used to do a low-accuracy alignment first. Then, once I had verified that there were no gross errors, I did the alignment at high accuracy and finally optimized the cameras.
-
The output of reflectance calibration is in 16 bit format according to MicaSense recommendations ... then during export use Index Value option in the Raster Transform section...
Hi Alexey,
Thank you very much for your reply! Everything works fine now that I have followed your notes.
Now I have a few new questions, which came up while processing my data with Agisoft:
1. How do I import the rotation data (roll/pitch/yaw angles) from the Sequoia? For the GPS data it works automatically, but the rotation data columns remain empty. Do you have any solution (e.g. a batch script) to read the rotation data from EXIF and import it for all images (TIF)?
2. Is the rotation data actually used when processing the image data?
3. Is an extension planned to export radiance values (in physical units)?
4. Maybe this is not entirely the right forum, but: when I import thermal image data (TIF) from a FLIR Tau 2, the images always stay black after "Set Brightness". With black images it is not possible to work in Agisoft. What would be your solution to stretch the data correctly without losing the EXIF data (GPS, etc.)? Should I send you a few samples?
Cheers,
Sören
-
Hello Sören,
Can you please check if turning on the "Load camera orientation from XMP meta data" option in the Advanced Preferences tab solves the problem of loading the angle information from the Sequoia camera into the Reference pane?
The orientation angles will be used for optimization of the camera alignment, so if you are loading them, they should be correct, and the "Camera accuracy (deg)" parameter should be set accordingly to define the precision of the measurements.
-
Can you please check if turning on "Load camera orientation from XMP meta data" option in the Advanced Preferences tab solves the problem for loading the angle information to the Reference pane from Sequoia camera? ...
Thank you! Now it works and I am really happy :-)
Is there a chance in Agisoft to swap the pitch/roll columns and adjust the prefix if they were captured incorrectly (due to a different mounting orientation than recommended)?
Cheers,
Sören
-
Hi guys.
I have a Sequoia camera and I use a Sequoia target (AIRINOV).
I used this guide for camera reflectance calibration -> http://www.agisoft.com/pdf/PS_1.4_(IL)_Refelctance_Calibration.pdf
But I don't get any result from the camera reflectance calibration.
Please help! I am using Agisoft 1.4.2.
-
Hello Leo7best,
Have you applied the masks to the calibration images and input the reflectance values for the source bands?
-
Hi Alexey.
Have you applied the masks to the calibration images?
1. Yes, I have applied the mask, but I didn't get any result (see my attachment).
input the reflectance values for the source bands?
2. Where can I add these values? The albedo values?
--
I can't find my target's .CSV file.
-
Hello Leo7best,
You need to input the reflectance values for each band in the right-side frame of the calibration dialog before pressing the OK button. If you have only four values, you can input them manually; there is no need to load a CSV (usually a CSV is a long file with two columns: wavelength and reflectance).
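If you do have such a two-column CSV from the panel vendor, mapping it to per-band values amounts to taking the reflectance at the wavelength closest to each band centre. A sketch of that lookup; the band centres used in the usage example below are illustrative, not taken from this thread:

```python
import csv
import io

def band_reflectances(csv_text, band_centers):
    """Given 'wavelength,reflectance' CSV rows, return the reflectance
    whose wavelength is closest to each requested band centre."""
    rows = [(float(w), float(r)) for w, r in csv.reader(io.StringIO(csv_text))]
    out = {}
    for center in band_centers:
        # Pick the row minimizing the distance to the band centre.
        wavelength, reflectance = min(rows, key=lambda wr: abs(wr[0] - center))
        out[center] = reflectance
    return out
```

For example, `band_reflectances(data, [550, 660, 735, 790])` would yield one value per Sequoia-style band, ready to type into the calibration dialog.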
-
Hi alexey.
Where do I find these reflectance values?
- The albedo values?
- The standard wavelength values (nm)?
Pix4D Ag finds these values automatically.
--
The sensitivity value?
-
Hello Leo7best,
I mean these values. If you have any information about the panel, they could be called albedo values.
-
I mean these values. If you have any information about the panel, they could be called albedo values.
Hi Alexey
I have the albedo values, but they are standard values.
I have attached two screenshots:
With calibration
Without calibration
I can't find the solution.
-
Hello Leo7best,
It seems that something has changed, so the calibration worked with the input data.
Now you can process the data as usual and generate the orthomosaic using the calibrated values.
-
It seems that something has changed, so the calibration worked with the input data. ...
Hi alexey
Perfect!! I have applied a brightness correction of 1000%.
--
I have two questions:
1. Do I always have to apply the standard albedo values?
2. Have I applied the calibration image masks correctly? Which mask is correct, Mask 1 or Mask 2?
-
Hello Leo7best,
You need to apply the values that are specific to the calibration panel being used.
As for the masking: the second mask (mask_2) is correct. The mask should cover everything except the calibration panel.
-
Hi Alexey
Thanks for the reply.