

Messages - Bzuco

Pages: 1 ... 8 9 [10] 11 12 13
136
General / Re: Need a solution to correct orthophotos issues
« on: October 27, 2021, 11:47:55 AM »
I am not sure I understand everything you want, but
you can try different blending modes, OR retouch the original scanned photos in Photoshop, OR create the orthophoto from a point cloud, where you can delete the black points.

137
General / Re: Trouble with Textures
« on: October 27, 2021, 11:19:16 AM »
@h_ogata
It is always good practice to check whether all cameras are aligned correctly.
You can do that visually: enable the Show Cameras icon and check whether the camera positions follow the drone's flight trajectory.
You will immediately notice if a camera has the wrong position or orientation.

138
General / Re: Fixing/avoiding poor image quality?
« on: October 27, 2021, 04:16:42 AM »
@Steve
He is talking about a completely different problem, unrelated to the one you are describing.

139
General / Re: Fixing/avoiding poor image quality?
« on: October 26, 2021, 09:48:40 PM »
Hi, you need to figure out whether the blurriness is motion blur or out-of-focus blur. Can you post one of the best and one of the worst images?
The EXIF info of your photos shows the exposure settings, which will tell you why a photo came out bad.

Some advice:
-set white balance to cloudy/sun/custom; do not use auto
-if autofocus does not work well, set manual focus and try to keep the same height, or refocus manually during flight when the height above ground changes.
-set the shooting mode to aperture priority, set ISO 100 and the lowest aperture number, and check the resulting shutter speed. If it is 1/200 s, that should be enough for a stable, not-too-fast flight...it also depends on flight altitude. If the resulting speed is much faster (1/500 s...1/800 s), you can try increasing the aperture number to 4 / 4.5 / 5.6 and check that the shutter speed stays at least 1/200 s.

Magic behind lens aperture:
Low aperture values produce sharper images in the middle, but can cause an unwanted chromatic aberration effect at the sides/edges; higher values produce slightly less sharp images, but without chromatic aberration. You need to find a sweet spot for your lens where you are satisfied with the image quality and sharpness. The final aperture value may force you to increase ISO, because the resulting shutter speed may no longer be fast enough for motion-blur-free images......playing with these 3 exposure parameters is important for photogrammetry.
Fixing these issues in post-production does not make much sense.
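The aperture/shutter trade-off above is plain exposure-triangle arithmetic: at fixed ISO and scene brightness, the shutter time scales with the square of the f-number ratio. A minimal sketch (the function name is mine, just for illustration):

```python
def shutter_after_aperture_change(t_old: float, n_old: float, n_new: float) -> float:
    """Shutter time that keeps the same exposure after changing the f-number
    (ISO and scene brightness held constant): t scales with (N_new / N_old) ** 2."""
    return t_old * (n_new / n_old) ** 2

# Stopping down from f/2.8 to f/5.6 is two stops, so 1/800 s becomes 1/200 s:
print(shutter_after_aperture_change(1 / 800, 2.8, 5.6))  # 0.005 (= 1/200 s)
```

So if your drone shot is 1/800 s at f/2.8, you have two stops of headroom to stop down for sharpness before dropping below the 1/200 s safety margin.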

140
General / Re: Processing a massive underwater photoset without GPS
« on: October 23, 2021, 06:12:30 PM »
OK, understood, sorry for slightly hijacking this thread.

I just made a quick test on a 200-photo set (house and backyard, shot from the ground) with different combinations of parameters and measured the resulting times.
"Not all aligned" - the model looks good, but not all cameras were marked as aligned.
So +1 for knowing when and why to use the exact methods, and for which scenario.

141
General / Re: Processing a massive underwater photoset without GPS
« on: October 23, 2021, 03:36:00 PM »
@Mariusz_M
Generic preselection does an additional quick calculation after the point-detection phase.
It is useful for the next phase - matching points between photos - because it discards photo pairs that don't have any overlap/common features. Without generic preselection, point matching would be done between every photo and every other photo (roughly num photos * num photos tasks).
As for the "estimated" and "sequential" options, I am not 100% sure; from my perspective they just let Metashape know how we took the photos - in chaotic order (orbiting some object) or sequentially (e.g. a long wall from left to right). I could be wrong, and there may be more behind these options.
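A rough back-of-the-envelope sketch of why preselection matters so much at scale (the "~k candidate neighbours per photo" bound is my own illustrative assumption, not Metashape's actual algorithm):

```python
def exhaustive_pairs(n: int) -> int:
    # Without preselection: every photo is matched against every other photo.
    return n * (n - 1) // 2

def preselected_pairs(n: int, k: int) -> int:
    # Rough upper bound if preselection keeps only ~k candidate neighbours
    # per photo (illustrative assumption, not Metashape's exact behaviour).
    return n * k

print(exhaustive_pairs(25000))       # 312487500 pair-matching tasks
print(preselected_pairs(25000, 40))  # 1000000 tasks
```

For a ~25k underwater set, that is the difference between hundreds of millions of candidate pairs and about a million, which is why skipping preselection can turn hours into days.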

@CheeseAndJamSandwich
Your underwater work is interesting.
Are you using generic preselection, and what is your hardware? 2.5 days for one big alignment seems too long; I would expect several hours of alignment time for a ~25,000-photo set.
Did you try decreasing the key point limit to, e.g., just 20k?
What precision do you need in the end (pixels of texture per cm/m...or point-cloud points per cm/m)?
Are you also often orbiting around rocks when taking photos, or is everything mostly top-down?
And one last funny question: can you swim faster, to make better use of the 2 s interval on the GoPro? :D

142
@andyroo
One fact: right after you drag and drop photo files into the project, Metashape checks all the files (a fairly quick process), but in the background the files are read from disk and automatically cached in RAM.
So my point is that if you start the align process right after the checking dialog finishes, you have a few photos in RAM and the rest still on disk.
You have 3 GPUs, which puts enormous pressure on disk reads.
I also don't know how much RAM your system has, how big your photos are (in megabytes), and whether your amount of RAM can absorb all 100k files to feed your 3 GPUs quickly.
Photos stored on an HDD instead of an SSD can also cause significant slowness if the photos are not already cached in RAM.

What you describe - a "1.5 s delay every 20 photos" - could be caused by the factors I mentioned, but also by some changes between the 1.4/1.5/1.7 versions...I can't tell more.
You can check how much your disk is reading during point detection, and the cached-RAM value, in Task Manager.
You should also be able to see fluctuating PCIe bus usage on each GPU in the GPU-Z utility during the delays.

I made a quick test on a small 200-photo set. When I started the align process after all disk reads had finished, the point-detection speed was constant and fast. When I started it right after drag and drop, point detection was fast for half the photos and then slowed down until the end, because my HDD was still reading photos.

143
Steve:
You can switch the grid plane (XY/XZ/YZ) in Preferences - Appearance, along with its transparency, color, and "density" (just how many lines each meter, not how many lines per meter  :-\).
The rotation-ball gizmo is not perfect, but you can drag with the mouse even outside the gizmo, with better sensitivity, so the orientation of the RGB circles is not so important.
It is also good to set keyboard shortcuts for object move, scale, rotation, ...to speed things up.

With these tips you should be able to set the scale, origin [0,0,0], and proper orientation of your model, though not with 100% precision. In the past I was able to set the orientation of a house with precision under 0.5 degrees. I was also using the edges of my screen  ;D . The grid and its plane settings I discovered only yesterday, reading the latest posts here  ::)

So yeah, the ability to set a precise scale and orientation of the model is missing in the Standard version, therefore after creating the dense cloud I leave Metashape and do the rest of my work in other software; but for its price it is still amazing for my purposes.

144
General / Re: Seabed mapping -> alignment of 100.000 pics
« on: October 20, 2021, 11:34:14 AM »
I don't know what the Pro version of Metashape is capable of, but using CloudCompare I would align two neighbouring patches individually, then apply resample (it should suppress duplicates), then merge/align the next patch and resample again, ...etc.

Or I would delete the common parts from one patch and keep that part only in the second patch of the neighbouring pair.
I do not know exactly what is on your screenshot or how it should look without the issue.

145
Yes, 6 decimal places can be overkill, but for a few thousand vertices it does not matter.
The OBJ file is a pure text file, so you can use "as many" decimal places as you want, because in the text representation of a number you are not limited the way you are in a binary computer representation.
Each decimal place in an OBJ costs you 3 bytes per vertex (one character for each of the three coordinates), so more vertices - more bytes - more time needed to export/import.
A value of 4-6 could be useful when photographing a very large area and a tiny object in one photo set (which will probably never happen).
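The "3 bytes per decimal place per vertex" figure is easy to check: each extra decimal adds one character to each of the three coordinates on a `v` line. A quick sketch (the helper name is mine, for illustration only):

```python
def obj_vertex_line(x: float, y: float, z: float, places: int) -> str:
    # One "v x y z" line as it appears in a text OBJ file, at a given precision.
    return f"v {x:.{places}f} {y:.{places}f} {z:.{places}f}\n"

v6 = obj_vertex_line(12.451205, 3.141593, 0.5, 6)  # "v 12.451205 3.141593 0.500000\n"
v3 = obj_vertex_line(12.451205, 3.141593, 0.5, 3)  # "v 12.451 3.142 0.500\n"
print(len(v6) - len(v3))  # 9 bytes: 3 extra decimals x 3 coordinates
```

Multiply that by the vertex count and you get the extra file size; for a few thousand vertices it is negligible, for millions it adds up.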

JPG should have 100% quality, but you can use PNG or TIFF if you want to be 100% sure about image quality.

146
General / Re: Mesh from Dense Point Cloud vs. Depth Maps
« on: October 18, 2021, 11:29:05 PM »
@bgreenstone

Blobbiness issue: the blob mesh should have very large polygons. If you export the mesh to MeshLab, there is a "Select Faces with edges longer than..." function, with an edge-threshold slider and a preview option. You should be able to select and delete only the faces of the blob, because the rest of the mesh should have much smaller polygons. Then export and import back into Metashape. Maybe this will help you until it is corrected somehow in Metashape.
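The core idea of that MeshLab filter - flag any triangle with an edge above a threshold - can be sketched in a few lines (the vertex/face data layout here is my own simplification, not MeshLab's internals):

```python
import math

def long_edge_faces(vertices, faces, threshold):
    """Indices of triangle faces with at least one edge longer than threshold.

    vertices: list of (x, y, z) tuples; faces: list of (i, j, k) index triples.
    """
    bad = []
    for idx, (a, b, c) in enumerate(faces):
        edges = (math.dist(vertices[a], vertices[b]),
                 math.dist(vertices[b], vertices[c]),
                 math.dist(vertices[c], vertices[a]))
        if max(edges) > threshold:
            bad.append(idx)
    return bad
```

Deleting the returned faces removes the stretched "blob" triangles while leaving the finely tessellated real surface intact, which is exactly why the trick works when the blob's polygons are much larger than the rest.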

As I know from CloudCompare/MeshLab, when creating a mesh from points using the Poisson surface reconstruction function, one of its parameters/options creates exactly this unwanted blob-shaped mesh; other options create just a flat surface at the edges of the object. Maybe Metashape is using this method or something similar. It also does not happen every time or on every object, but I don't know exactly what is needed to prevent the blob effect.

147
General / Re: Mesh from Dense Point Cloud vs. Depth Maps
« on: October 18, 2021, 05:35:57 PM »
Maybe the "select mesh by element" function in 3D apps:
select the biggest mesh elements, then invert the selection and delete the mess.
Or select vertices by vertex color, convert the selection from vertices to faces/elements, and delete.
Or simply create the mesh from the dense cloud, where you can select points by color or some other metric. Use Metashape or CloudCompare for creating the mesh and filtering the points, whichever works better for you or creates the better result. I think that if the dense cloud is good and dense enough, there should not be much difference in mesh quality compared with the pure depth-maps workflow.

148
It is the precision of the vertex positions (number of decimal places).
Value 6 - e.g. 12.451205 meters.
Value 3 should be enough (millimeter precision), but you can keep it at 6.

149
General / Re: Could MS use MORE RAM???
« on: October 15, 2021, 07:54:58 PM »
Alignment phase: 2/3 of it is done on the GPU, so the limit is only VRAM. And overall the alignment phase is not so RAM/VRAM hungry, so I guess no reductions in memory consumption were applied here at the expense of performance.

The depth-map generation phase is done on the GPU, and you need to feed the GPU through the PCIe slot (which is fast enough, so no bottleneck).
During this phase you can check whether there are any disk reads and, if so, how large the spikes are - just a few MB/s, or tens or hundreds of MB/s. If there are no significant reads, everything was in RAM  :P .

The next phase is depth-map filtering. In this phase, RAM usage depends on a few factors: how many depth maps are filtered together (the BuildDenseCloud/max_neighbors parameter in tweaks) and, of course, the resolution quality.
The next, more important factor for RAM requirements is the shape of the object you photographed and where you took the photos.
In a scenario with one long wall and camera positions next to each other pointing in the same direction, this is not RAM hungry, because only a few (3-4?) depth maps are needed together in the filtering process at the same time.
In a scenario where you orbit around an object and take several hundred photos pointing in roughly the same direction, this will be very hungry, because a lot of depth maps need to be filtered together at the same time.
It would be difficult to predict which scenario will occur and make MS smart enough about the RAM decision...whether to keep more data in RAM or temporarily put data on disk.

This is how I see the RAM requirements...a more in-depth analysis would need the MS programmers :D

The last phase of dense-cloud generation is joining and storing the final data on disk. Here you can speed things up if your project is on an SSD instead of a classic HDD. I am using an HDD and often see MS waiting on the disk to write all the dense data, but at the size of my projects it still does not matter. Your projects are a different story :D.

150
General / Re: Could MS use MORE RAM???
« on: October 15, 2021, 04:58:00 PM »
Windows automatically keeps recently used files in RAM until the memory is needed for other heavy tasks. So in the scenario where you copy photos from the GoPro somewhere on disk, those files are automatically in RAM until you restart the computer. And if you run MS right after copying and start processing, those files are read from RAM. That is how Windows automatic caching works: when a program asks for a file from disk, there is first a lookup in RAM, and only if the file is not there is it read from disk.
If you start processing in MS after restarting the PC, you still need to read the photo files from disk the first time, but any subsequent work will use the RAM copies of those files.
Even when MS creates files during processing and stores them in ZIP files on disk, those files are immediately in RAM as well.

You can check the different memory values in Task Manager: total, available, cached, free. RAM is used for more than just the actual "in use" value.
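A tiny stdlib-only demo of the caching idea (it can only show that repeated reads return the same data; whether the second read actually comes from the page cache depends on the OS and cannot be asserted portably - `cache_demo.bin` is a made-up temporary file name):

```python
import os
import tempfile

# Write a 1 MiB file, then read it twice. On a typical system the first read
# may touch the disk (cold cache), while the second is normally served from
# the OS page cache in RAM. Timings vary, so we only verify the data here.
path = os.path.join(tempfile.gettempdir(), "cache_demo.bin")
with open(path, "wb") as f:
    f.write(os.urandom(1 << 20))  # 1 MiB of random data

with open(path, "rb") as f:
    first = f.read()   # cold read: may actually hit the disk
with open(path, "rb") as f:
    second = f.read()  # warm read: typically served from the page cache

assert first == second  # same bytes either way; only the source differs
os.remove(path)
```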
