Messages - aloerch

Bug Reports / Re: Generate Ortho - Big Tiff Error
« on: May 16, 2020, 01:18:30 PM »
Hi Alexey,

Thank you again for working on this. Can you please confirm that the recently released 1.6.3 version fixed this issue?


Bug Reports / Re: Generate Ortho - Big Tiff Error
« on: April 09, 2020, 10:50:52 PM »
Hi Alexey,

Thank you for looking into this. Unfortunately, we are prohibited from sharing this particular dataset by the satellite vendor (Digital Globe). But, we can work around the problem by generating and outputting several 4 band images instead of a single 9-band image. It would be helpful though if you might look into the libtiff usage for large files in the generate Ortho workflow.

Bug Reports / Re: Generate Ortho - Big Tiff Error
« on: April 09, 2020, 04:04:53 PM »
Modified previous post with more details.

Bug Reports / Re: Generate Ortho - Big Tiff Error
« on: April 09, 2020, 03:57:39 PM »
Hi Alexey,

Here are the details you asked for, followed by the general workflow I'm following:

Agisoft MetaShape Professional 1.6.2 Build 10247 (64 Bit)
Image dimensions: 23996 x 20888 (9 bands)
Image file size: 9,022,144,080 bytes
Linux Mint 19.3 64-bit
128 GB Ram

Goal: Take a single Worldview 3 (WV3) satellite image and generate an orthorectified image using an imported DEM.

General Workflow:
1. Load the pan-sharpened WV3 9-band image
2. Load the same image with only 1 band, the panchromatic band
3. Check Camera Calibration settings to ensure both are listed as RPC type
4. Add ground control points and optimize cameras
5. Import a LiDAR-based DEM
6. Disable the panchromatic-only image
7. Generate Orthomosaic using the imported DEM

The failure occurs at step #7 with the error I posted previously. That error appears to show that Generate Orthomosaic's libtiff implementation is unable to generate a TIFF larger than 4 GB.

If I use the exact same input image but reduce it from 9 bands to 4, thus reducing the input file size, then Generate Orthomosaic works without any errors, and I can then export the orthomosaic without any errors.

Also, if I use the exact same input image but set the output pixel size during "Generate Orthomosaic" to 1.6 m instead of the native 0.3 m, that also works without errors, and I can then export the orthomosaic without any errors.

Ideally, I would be able to generate the orthomosaic at the native resolution (0.3 m) with the given number of bands (9), but the error seems to indicate that libtiff can't write the generated file(s) because they are larger than 4 GB.

Now that Agisoft Metashape supports satellite imagery, it would be helpful if Generate Orthomosaic could use BigTIFF when necessary... I am guessing that is the problem.
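For what it's worth, the classic (non-Big) TIFF format uses 32-bit offsets, which caps a file at 4 GiB. A quick back-of-the-envelope check in plain Python (assuming uncompressed 16-bit samples, which is consistent with the ~9 GB input file size above; the exact output size will differ slightly due to headers and tiling) shows why 9 bands at 0.3 m fails while 4 bands, or 1.6 m, succeeds:

```python
# Rough size check: does an uncompressed 16-bit raster exceed the
# classic-TIFF 4 GiB (32-bit offset) limit? Assumes no compression.
CLASSIC_TIFF_LIMIT = 2**32  # 4 GiB

def raw_size(width, height, bands, bytes_per_sample=2):
    """Uncompressed pixel-data size in bytes."""
    return width * height * bands * bytes_per_sample

# Orthomosaic dimensions from the log: 24192 x 21504
full = raw_size(24192, 21504, bands=9)    # 9 bands at 0.3 m
four = raw_size(24192, 21504, bands=4)    # reduced to 4 bands
# 1.6 m instead of 0.3 m shrinks each dimension by a factor of 0.3/1.6
coarse = raw_size(round(24192 * 0.3 / 1.6), round(21504 * 0.3 / 1.6), bands=9)

for name, size in [("9 bands @ 0.3 m", full),
                   ("4 bands @ 0.3 m", four),
                   ("9 bands @ 1.6 m", coarse)]:
    status = "exceeds" if size > CLASSIC_TIFF_LIMIT else "fits within"
    print(f"{name}: {size / 2**30:.2f} GiB -> {status} classic TIFF limit")
```

The 9-band full-resolution mosaic comes out around 8.7 GiB (over the limit), while the 4-band version lands just under 4 GiB and the 1.6 m version well under it, matching the observed pass/fail pattern.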

Bug Reports / Generate Ortho - Big Tiff Error
« on: April 06, 2020, 10:30:53 PM »
This problem is occurring during the "Generate Ortho" step, not the export ortho step.

I am trying to generate an orthoimage from a WorldView 3 satellite image using an imported DSM. The image has 8 bands and a pixel size of 0.3 m.

If I choose "Generate Ortho" and try to generate the ortho with the 0.3 m pixel size, I get this error (from the logs):

Code:
2020-04-06 12:16:27 Generating orthomosaic...
2020-04-06 12:16:27 initializing...
2020-04-06 12:16:27 Analyzing DEM...
2020-04-06 12:16:27 estimating tile boundaries... done in 7.39302 sec
2020-04-06 12:16:35 generating 24192x21504 orthomosaic (9 levels, 0 resolution)
2020-04-06 12:16:35 selected 1 cameras
2020-04-06 12:16:35 saved orthomosaic data in 0.006465 sec
2020-04-06 12:16:35 saved camera partition in 0.000693 sec
2020-04-06 12:16:35 scheduled 1 orthophoto groups
2020-04-06 12:16:35 loaded camera partition in 0.000307 sec
2020-04-06 12:16:35 loaded orthomosaic data in 0.000518 sec
2020-04-06 12:16:35 Orthorectifying images...
2020-04-06 12:16:35 Orthorectifying 1 images
2020-04-06 12:18:24 19jul24184818-p2as-012267371010_01_p001: 24192x21504 -> 23831x20773
2020-04-06 12:19:45 libtiff error: Maximum TIFF file size exceeded
2020-04-06 12:19:45 libtiff error: Maximum TIFF file size exceeded
2020-04-06 12:19:46 Finished processing in 198.518 sec (exit code 0)
2020-04-06 12:19:46 Error: TIFFWriteTile: unexpected error: memory stream

However, if I change the desired pixel size to 1.6 m, the orthomosaic generates properly, just at a reduced resolution.

I know I can select "BigTiff" during the export orthomosaic process, but this does not seem to be possible with the Generate Orthomosaic process. Is there a "Tweak" parameter I can add, or can this be fixed? I need the output orthomosaic to be the same pixel size as the original image (0.3 m).

Bug Reports / Re: Deleted Dense Points Still Used for Mesh Generation
« on: August 31, 2018, 09:41:00 PM »
Thanks Alexey, that makes sense! I had no idea that the visibility consistent mesh option worked that way!

Bug Reports / Deleted Dense Points Still Used for Mesh Generation
« on: August 31, 2018, 12:06:14 AM »

I know this has been posted before for earlier versions, but the solutions suggested did not work. I've isolated the problem somewhat, and found a workaround, so I'm sharing it here, and hopefully it can be fixed in an upcoming version.

Version: Agisoft PhotoScan 1.4.3 (Linux and Windows both have this issue)

Problem description:

1. Delete some undesirable points from the dense point cloud.
2. I've tried this both with and without the "Compact point cloud" option; the results are identical.
3. Choose "Workflow -> Build Mesh". Surface type Arbitrary, Reuse depth maps unchecked.
4. The resultant mesh has polygons for the entire dense point cloud, including for the deleted points.
5. To verify this, I tried deleting fully half of the dense points in the scene, and the resultant mesh still covered the entire area, including the deleted points. I also tried deleting the sparse points, in case they were being used for some reason; that still did not work. I tried this in the Linux and Windows versions of PhotoScan and the problem exists in both. I also tried saving the project as a new file, and that did not work.

What worked:
Instead of using "Workflow -> Build Mesh", I used "Workflow -> Batch Process -> Build Mesh". I set the surface type to Arbitrary again and Source Data to Dense Cloud. The resultant mesh did NOT include the deleted dense points.

This leads me to believe that "Workflow -> Build Mesh" is not using the edited Dense Cloud, but rather it is using the full dense cloud. By contrast, the batch process version does seem to use the edited dense cloud.

I hope this helps others, and I hope this gets resolved.

Bug Reports / Re: can't export Undistort Photos
« on: November 25, 2017, 02:51:10 AM »
I can confirm this problem with version 1.3.4 build 5067. I tried it in both Windows and Linux; in both cases, the default template "(unknown).{fileext}" would try to save my undistorted Sony a6000 JPEGs (whose filenames have a capital JPG extension) with a capital JPG extension. The error was "can't save image", and it occurs on new projects as well as projects I had previously exported undistorted images for.

The solution, as the OP posted, was to change .{fileext} to .jpg (with jpg in lower case). At the moment, PhotoScan seems unable to save the images when the original filenames have a capital JPG extension and the default undistort template is used.
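The manual template fix amounts to forcing the extension to lower case. A minimal Python sketch of that idea (the `fix_extension` helper and filename are illustrative only, not PhotoScan's actual template code):

```python
import os

def fix_extension(filename, ext="jpg"):
    """Replace the original extension with a fixed lower-case one,
    mirroring the manual .{fileext} -> .jpg template change."""
    stem, _ = os.path.splitext(filename)
    return f"{stem}.{ext}"

print(fix_extension("DSC00123.JPG"))  # DSC00123.jpg
```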

Bug Reports / Re: Blue Screen of Death - Calculating Color Correction
« on: July 30, 2017, 03:35:47 AM »
Thank you for the reply. I realize it's been a while since I posted the initial problem. What I can say now is this:

When generating the orthomosaic with color correction enabled in Photoscan, I no longer get a BSOD (since the latest NVIDIA driver update), but Photoscan does still crash/close... but only in Windows 10. In Linux Mint, this problem does not occur (I have my GPUs enabled there too). I've also run a memory test from my BIOS, and everything passes. The BSOD problem in Windows 10 started at the same time that another Nvidia/Photoscan-related problem was corrected by a beta update to Photoscan, except now it's just Photoscan crashing and not Windows.

For now, anytime I need to create an orthomosaic, I reboot my PC into Linux.

Bug Reports / Blue Screen of Death - Calculating Color Correction
« on: June 27, 2017, 07:36:31 AM »
This problem has been occurring with version 1.3.2 and the previous version (1.3.1), but I'd never had it previously.

In Windows 10 CU, all of my processing completes fine until I get to the build ortho step. At about 13% of the color correction process, my computer gives me a BSOD. This is the only application in which I've seen a BSOD since Windows XP, and it only occurs during this step in PhotoScan. It happens with all 3 projects I've tried.

In Linux Mint 16.04, these same project files complete the color correction without any issues. Both the Windows and Linux installations have the most current Nvidia drivers for my 2 GTX 1080 Ti's. My CPU is a 14-core Xeon, and I have 128 GB RAM.

I didn't have logging turned on with my Windows installation, and right now I'm processing a different dataset, in Linux.

When I get a chance later this week, I'll try to rerun the color correction in windows with logging on, if that's helpful for you.

General / Re: Computer for large mapping project
« on: May 26, 2017, 03:05:25 AM »
Here's my experience:

I've processed UAV and manned-aircraft imagery using Photoscan. For UAV imagery, it's typically a Sony a6000 (similar in image specs to the QX1), and for manned imagery, two Nikon D810s (one RGB, one NIR). The largest UAV area I've processed has been 2 sq km at 1 cm GSD and 1,900 images. The largest manned-aircraft project I've processed in Agisoft was 150 sq km at 8 cm GSD, with 14,000 images processed together (7K RGB, 7K NIR). I'm using a Puget Systems Genesis model PC with 128 GB RAM (more would be better) and two GPUs (an eVGA 1080 Ti and an eVGA 1080 FTW).

I've not had problems with processing these datasets (up to 14,000, 36 Megapixel images) in Agisoft, although I do have recommendations for workflows to smooth the process along.

If you are using RGB and NIR images, you will want to create your camera model calibrations separately before starting step 1 below; if you're using just RGB, jump right in.

1. Align ALL of your images in the same chunk if your hardware will support it. Mine supports at least 14,000 images being aligned together.

2. Once the alignment is complete, take care of your ground control, tie-point selections, etc. Optimize the camera positions (if you pre-calibrated RGB and NIR cameras, this step won't affect the camera model calibrations).

3. Split your project into chunks if it is huge or if you are going to use the "High" quality dense point cloud generation. The number of chunks depends on what your hardware can handle. Splitting your images into chunks at this point ensures that your final dense point clouds and meshes/models will remain perfectly aligned to one another. For the 14,000-image project, the 7,000 RGB images went into 2 chunks and the 7,000 NIR images into another 2, so 4 chunks total.

4. Use batch processing to run dense point cloud generation on each chunk (not including the original alignment chunk that held all of the images). At this point, in batch processing, you can also set up the classify ground points option, which performs better on smaller chunks than on one huge chunk.

5. After the batch processing of dense clouds completes, you can merge the chunks without needing to "align" them as they are already pre-aligned.

6. Proceed to create your DSM, DEM, and Ortho
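The split in step 3 is just bookkeeping over the already aligned image list. A minimal sketch in plain Python (filenames and chunk counts here are illustrative, not tied to any Photoscan API):

```python
def split_into_chunks(images, n_chunks):
    """Split an ordered image list into n roughly equal chunks,
    preserving order so each chunk covers a contiguous strip."""
    size, rem = divmod(len(images), n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        end = start + size + (1 if i < rem else 0)  # spread the remainder
        chunks.append(images[start:end])
        start = end
    return chunks

# e.g. 7,000 RGB and 7,000 NIR images, two chunks each -> 4 chunks total
rgb = [f"rgb_{i:04d}.jpg" for i in range(7000)]
nir = [f"nir_{i:04d}.jpg" for i in range(7000)]
chunks = split_into_chunks(rgb, 2) + split_into_chunks(nir, 2)
print([len(c) for c in chunks])  # [3500, 3500, 3500, 3500]
```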

So, the chunking is pretty much the way to go if your datasets are huge and/or if you want really high density point clouds. As long as the chunks are pre-aligned, there's really no problem. Other applications like Menci APS and 3DFlow Zephyr perform chunking automatically after image alignment in the background (batches) in order to accomplish processing of large datasets on limited hardware. This might be something Agisoft would consider implementing as an option, since chunking is both powerful and clunky. Still, to answer your question, Agisoft has no problem with very large datasets.

$2,000-$3,000 for your budget, though, might not get you hardware anywhere near what I'm using, so... use a greater number of chunks :-P

General / Re: Slow alignment time - compare between computers
« on: September 21, 2016, 02:15:49 AM »

I'm going to go out on a limb here and suggest that what you are experiencing is probably a difference in clock speed between the desktop CPU and the laptop CPU. Here's a bit of background:

1. Photoscan does not use the GPUs for the alignment process; that step is 100% CPU based.
2. You don't specify the number of cores or the clock speeds of either the laptop or the desktop, but because alignment is entirely CPU based, this could be the reason for the difference.

Also, it could just be that your anti-virus was running on the desktop and not the laptop during processing... or updates were being downloaded/installed, or the moon was passing through the 8th house of Jupiter (lol)?

General / Re: Camera Station Orientations and View Angles
« on: July 04, 2016, 06:43:53 PM »
Never mind, I believe I figured it out: "View Estimated" under the Reference pane, or better yet, "Export estimated".

General / Camera Station Orientations and View Angles
« on: July 03, 2016, 02:11:42 PM »
Hi, I have a question about the camera station orientations and view angles after the alignment process.

What I am trying to do is pick a point on Image 1, and find the matching single point on Image 2, and based on the Camera Station orientations for both of these images, find the difference in the view angles between them.

In order to do this, I need to know the camera station orientations in terms of the view angle at each image's center. Obviously PhotoScan calculates this during photo alignment, but is it accessible to the user, e.g. as a table of values?
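If the estimated orientations can be exported, the angular difference between two cameras' viewing directions is easy to compute outside PhotoScan. A minimal sketch in plain Python (it assumes each camera's viewing direction is available as a unit vector, e.g. derived from its exported rotation; the example vectors are made up):

```python
import math

def view_angle_deg(v1, v2):
    """Angle in degrees between two unit viewing-direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    dot = max(-1.0, min(1.0, dot))  # clamp to guard against rounding
    return math.degrees(math.acos(dot))

nadir = (0.0, 0.0, -1.0)  # looking straight down
# a hypothetical camera tilted 30 degrees off nadir
oblique = (0.0, math.sin(math.radians(30)), -math.cos(math.radians(30)))
print(round(view_angle_deg(nadir, oblique), 1))  # 30.0
```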

Thanks in advance for your help.

General / Windows vs Linux for PS
« on: June 15, 2016, 11:11:56 AM »

I just purchased a pretty high-end system specifically for two tasks. The first, and most important, is creating high-res DSMs and orthos with PhotoScan across large datasets (500-5,000 images); the second is creating and training Deep Belief Networks (machine-learning neural networks, in my case specifically for image feature/object classification).

Photoscan obviously works well with Windows (I've been using it there for years), and the DBN tooling is best supported on Linux. I have searched these forums and Google for information on how Photoscan's processing times/efficiency compare between Windows and Linux, with no luck at all.

Does anyone have experience or actual benchmark results of processing data where the only change between systems was the OS? Does Photoscan run better under Linux than Windows?

Here's the system I'll be using:

Intel Xeon E5-2690 V4 2.6GHz Fourteen Core 35MB 135W CPU
4 x Crucial DDR4-2133 16GB ECC Reg. Ram
3 x NVIDIA GeForce GTX 970 4GB Video Card
Samsung 850 Pro 512GB SATA 6Gb/s 2.5inch SSD Hard Drive
2 x Western Digital RE 6TB SATA 6Gb/s Hard Drive
