

Messages - Bzuco

Pages: 1 ... 12 13 [14] 15 16 17
196
General / Re: Disappointing performance results on new MacBook Pro M1 Max
« on: November 02, 2021, 11:56:11 AM »
@bgreenstone
If I look at the Mac Pro 2013 specifications, the best model (8-core) has 2x AMD FirePro D700 graphics. Each has 2048 stream processors at 850 MHz, with 3.5 TFLOPS of processing power (7 TFLOPS from both GPUs).
If I look at the MacBook Pro M1 Max specifications, the 32-core GPU has 4096 stream processors at 1296 MHz (peak clock), with 10.4 TFLOPS of processing power.

The 32 compute units you see in Metashape are streaming multiprocessors, and each one has 128 individual stream/shader/execution/... units.

Summary:
OLD: 2x 2048 units @ 850 MHz, total computation power 7 TFLOPS.
NEW: 1x 4096 units @ 1000-1296 MHz, total computation power 10.4 TFLOPS.
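These TFLOPS figures follow from the usual peak-throughput estimate, shader units × clock × 2 FLOPs per FMA. A quick sketch (the function name is mine; vendor-quoted numbers can differ slightly depending on which clock they assume):

```python
def theoretical_tflops(shader_units: int, clock_ghz: float) -> float:
    """Peak single-precision throughput: units * clock * 2 FLOPs per FMA."""
    return shader_units * clock_ghz * 2 / 1000.0

# One FirePro D700: 2048 units @ 0.85 GHz -> ~3.48 TFLOPS (x2 GPUs ~= 7)
print(theoretical_tflops(2048, 0.85))

# M1 Max 32-core GPU: 4096 units @ 1.296 GHz peak -> ~10.6 TFLOPS,
# close to the quoted 10.4 (which assumes a slightly lower sustained clock)
print(theoretical_tflops(4096, 1.296))
```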

197
General / Re: How to convert black and white image to colour
« on: November 01, 2021, 12:55:21 PM »

198
General / Re: Need a solution to correct orthophotos issues
« on: October 27, 2021, 11:47:55 AM »
I am not sure I understand everything you want, but
you can try different blending modes, OR retouch the original scanned photos in Photoshop, OR create the orthophoto from the point cloud, where you can delete the points with black color.
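If you go the point-cloud route, deleting the near-black points can also be scripted outside Metashape. A minimal numpy sketch; the threshold value and the Nx6 XYZRGB array layout are my assumptions for illustration:

```python
import numpy as np

def drop_dark_points(cloud: np.ndarray, threshold: int = 30) -> np.ndarray:
    """Keep points whose brightest RGB channel exceeds the darkness threshold.
    cloud: Nx6 array of [x, y, z, r, g, b] with 0-255 colors (assumed layout)."""
    rgb = cloud[:, 3:6]
    keep = rgb.max(axis=1) > threshold
    return cloud[keep]

cloud = np.array([
    [0.0, 0.0, 0.0, 10, 10, 10],   # near-black scan border -> dropped
    [1.0, 2.0, 0.5, 120, 90, 60],  # normal surface color -> kept
])
print(drop_dark_points(cloud).shape)  # (1, 6)
```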

199
General / Re: Trouble with Textures
« on: October 27, 2021, 11:19:16 AM »
@h_ogata
It is always good practice to check whether all cameras are correctly aligned.
You can do that just visually... enable the show cameras icon and look at the cameras to see whether their positions follow the drone's flight trajectory.
You will notice immediately if some camera has the wrong position or orientation.

200
General / Re: Fixing/avoiding poor image quality?
« on: October 27, 2021, 04:16:42 AM »
@Steve
He is talking about a completely different problem, which is not related to the problem you are describing.

201
General / Re: Fixing/avoiding poor image quality?
« on: October 26, 2021, 09:48:40 PM »
Hi, you need to figure out whether the blurriness is motion blur or out-of-focus blur. Can you post one of the best and one of the worst images?
The EXIF info of your photos shows the exposure settings, which will tell you why the photo was bad.

Some advice:
-set white balance to cloudy/sun/custom, do not use auto
-if autofocus does not work well, set manual focus and try to keep the same height, or refocus manually during flight when the height above ground changes.
-switch the shooting mode to aperture priority, set ISO 100 and the lowest aperture number, and check the resulting shutter speed. If it is 1/200 s, that should be enough for a stable, not-too-fast flight... it also depends on flight altitude. If the resulting speed is much faster (1/500 s...1/800 s), you can try increasing the aperture number to 4 / 4.5 / 5.6 and check that the shutter speed is still at least 1/200 s.

Magic behind lens aperture:
Low aperture values produce sharper images in the middle, but can cause an unwanted chromatic aberration effect at the sides/edges; higher values produce slightly less sharp images, but without chromatic aberration. You need to find a sweet spot for your lens where you will be satisfied with the image quality and sharpness. Maybe the final aperture value will force you to increase ISO, because the resulting shutter speed would otherwise not be fast enough for motion-blur-free images... playing with these 3 exposure parameters is important for photogrammetry.
Fixing these issues in post-production does not make much sense.
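To sanity-check a shutter speed against motion blur, you can estimate the blur in pixels from flight speed, shutter time, and ground sampling distance. A rough sketch; the sensor numbers below are made-up example values, not any particular drone:

```python
def gsd_m(pixel_pitch_m: float, focal_length_m: float, altitude_m: float) -> float:
    """Ground sampling distance: meters of ground covered by one pixel."""
    return pixel_pitch_m * altitude_m / focal_length_m

def motion_blur_px(speed_mps: float, shutter_s: float, gsd: float) -> float:
    """Ground distance travelled during the exposure, expressed in pixels."""
    return speed_mps * shutter_s / gsd

# Illustrative values: 2.4 um pixel pitch, 8.8 mm lens, 50 m altitude
g = gsd_m(2.4e-6, 8.8e-3, 50.0)                 # ~1.4 cm per pixel
print(round(motion_blur_px(5.0, 1/200, g), 1))  # ~1.8 px at 5 m/s, 1/200 s
```

Anything much above about one pixel of blur starts to soften the features that point detection depends on, which is why 1/200 s is only borderline at 5 m/s in this example.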

202
General / Re: Processing a massive underwater photoset without GPS
« on: October 23, 2021, 06:12:30 PM »
OK, understood, sorry for slightly hijacking this thread.

I just made a quick test on a 200-photo set (house and backyard from the ground) with combinations of parameters and the resulting times.
"Not all aligned" - the model looks good, but not all cameras were marked as aligned.
So +1 for knowing when and why to use the exact methods, and for which scenario.

203
General / Re: Processing a massive underwater photoset without GPS
« on: October 23, 2021, 03:36:00 PM »
@Mariusz_M
Generic preselection does an additional quick calculation after the point detection phase.
It is useful for the next phase - matching points between photos - as it discards from the process photo pairs which don't have any overlap/common features. Without generic preselection, point matching would run between each photo and every other one (num photos * num photos tasks).
As for the "estimated" and "sequential" options, I am not 100% sure; from my perspective they just let Metashape know how we were taking photos - whether in chaotic order (orbiting some object) or sequentially (e.g. a long wall from left to right). I could be wrong and there may be more behind these options.
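The blow-up without preselection is easy to see in numbers; the matcher has to consider every unordered photo pair:

```python
def photo_pairs(n: int) -> int:
    """Unordered photo pairs the matcher must consider without preselection."""
    return n * (n - 1) // 2

print(photo_pairs(200))     # 19,900 pairs for a small set
print(photo_pairs(25_000))  # 312,487,500 pairs - why preselection matters
```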

@CheeseAndJamSandwich
Your underwater work is interesting.
Are you using generic preselection, and what is your hardware? 2.5 days for one big alignment seems too long. I would expect several hours of alignment time for a ~25,000-photo set.
Did you try decreasing the key point limit to e.g. just 20k?
What precision do you need in the end? (pixels on texture per cm/m... or point cloud points per cm/m)
Are you also often orbiting around rocks when taking photos, or is everything mostly top-down?
And a last funny question: can you swim faster to make better use of the 2 s interval on the GoPro? :D

204
@andyroo
One fact: right after you drag and drop photo files into the project, Metashape checks all files (a quite quick process), but in the background all files are read from disk and automatically cached in RAM by the OS.
So my point is that if you start the align process right after the checking dialog finishes, you have a few photos in RAM and the rest still on disk.
You have 3 GPUs, which puts enormous pressure on disk reads.
I also don't know how much RAM your system has, how big your photos are (in megabytes), and whether your amount of RAM can absorb all 100k files to feed your 3 GPUs quickly.
Photos stored on an HDD instead of an SSD can also cause significant slowness if the photos are not already cached in RAM.

The "1.5 s delay every 20 photos" you are describing could be caused by the facts I mentioned, but also by some changes between the 1.4/1.5/1.7 versions... I can't tell more.
You can check how much your disk is reading during point detection, and the value of cached RAM, in Task Manager.
You should also be able to see fluctuating PCIe bus usage on each GPU in the GPU-Z utility during the delays.

I made a quick test on a small 200-photo set. When I started the align process after all reads from disk had finished, the speed of point detection was constant and quick. When I started the align process right after drag and drop, point detection on half the photos was quick and then the process slowed down until the end, because my HDD was still reading photos.

205
Steve:
You can switch the grid plane XY/XZ/YZ in Preferences - Appearance, as well as the transparency, color and "density" (just how many lines there are each meter, not how many lines per meter :-\)
The rotation ball gizmo is not perfect, but you can manipulate it with the mouse even outside the gizmo, and with better sensitivity, so the orientation of the RGB circles is then not so important.
It is also good to set keyboard shortcuts for object move, scale, rotation, ... to speed things up.

With these tips you should be able to set the scale, origin [0,0,0] and proper orientation of your model, but not with 100% precision. In the past I was able to set the orientation of a house with precision under 0.5 degree. I was also using the edges of my screen ;D . The grid and its plane settings I discovered only yesterday, reading the latest posts here ::)

So yeah, the ability to set a precise scale and orientation of the model is missing in the Standard version, therefore after creating the dense cloud I leave Metashape and the rest of my work is done in other software; but for its price it is still amazing for my purposes.

206
General / Re: Seabed mapping -> alignment of 100.000 pics
« on: October 20, 2021, 11:34:14 AM »
I do not know what the Pro version of Metashape is capable of, but using CloudCompare I would align two neighbouring patches individually, then apply resample (it should suppress duplicates), then merge/align the next patch and resample again, ...etc.

Or I would delete the common parts from one patch and keep that part only in the second patch of the neighbouring pair.
I do not know what exactly is on your screenshot or how it should look without the issue.

207
Yes, 6 decimal places can be overkill, but for a few thousand vertices it does not matter.
The OBJ file is a pure text file, so you can put "as many" decimal places as you want, because the text representation of a number is not limited like the binary computer representation.
Each decimal place in an OBJ costs you 3 bytes per vertex (one character per coordinate), so more vertices - more bytes - more time needed to export/import.
Values of 4-6 could be useful when taking photos of a very large area and a tiny object in one photo set (this will probably never happen).
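The 3-bytes-per-decimal-place figure is just one extra character per coordinate, three coordinates per `v` line; easy to check:

```python
def obj_vertex_line(x: float, y: float, z: float, decimals: int) -> str:
    """Format one OBJ vertex record with a fixed number of decimal places."""
    return f"v {x:.{decimals}f} {y:.{decimals}f} {z:.{decimals}f}"

v6 = obj_vertex_line(12.451205, 3.000001, -0.25, 6)
v3 = obj_vertex_line(12.451205, 3.000001, -0.25, 3)
print(v6)                  # v 12.451205 3.000001 -0.250000
print(v3)                  # v 12.451 3.000 -0.250
print(len(v6) - len(v3))   # 9 bytes: 3 extra decimals x 3 coordinates
```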

JPG should have 100% quality, but you can use PNG or TIFF if you want to be 100% sure about image quality.

208
General / Re: Mesh from Dense Point Cloud vs. Depth Maps
« on: October 18, 2021, 11:29:05 PM »
@bgreenstone

Blobbiness issue: the blob mesh should have very large polygons. If you export the mesh to MeshLab, there is a function "Select Faces with edges longer than..." with an edge threshold slider and a preview option. You should be able to select and delete the faces of the blob only, because the rest of the mesh should have much smaller polygons. Then export/import back to Metashape. Maybe this will help you until it is corrected somehow in Metashape.

As I know from CloudCompare/MeshLab, when creating a mesh from points using the Poisson surface reconstruction function, one of its parameters/options creates this unwanted blob-shaped mesh, while other options create just a flat surface at the edges of the object. Maybe Metashape is using this method or something similar. It also does not happen every time or on every object, but I don't know exactly what is needed to prevent the blob effect.
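The same long-edge selection can also be done outside MeshLab with plain numpy on the vertex/face arrays. A minimal sketch of the idea; the threshold and the tiny example mesh are illustrative:

```python
import numpy as np

def drop_long_edge_faces(vertices: np.ndarray, faces: np.ndarray,
                         max_edge: float) -> np.ndarray:
    """Keep only triangles whose longest edge is <= max_edge."""
    tri = vertices[faces]                        # (F, 3, 3) triangle corners
    edges = tri - np.roll(tri, shift=1, axis=1)  # the 3 edge vectors per face
    lengths = np.linalg.norm(edges, axis=2)      # (F, 3) edge lengths
    return faces[lengths.max(axis=1) <= max_edge]

vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [10, 10, 0]], float)
faces = np.array([[0, 1, 2],    # small triangle, edges ~1 unit -> kept
                  [0, 1, 3]])   # "blob" triangle with a ~14-unit edge -> dropped
print(drop_long_edge_faces(vertices, faces, max_edge=2.0))  # [[0 1 2]]
```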

209
General / Re: Mesh from Dense Point Cloud vs. Depth Maps
« on: October 18, 2021, 05:35:57 PM »
Maybe use the select-mesh-by-element function in 3D apps.
Select the biggest mesh elements, then invert the selection and delete the mess.
Or select vertices by vertex color, convert the selection from vertices => faces/elements, and delete.
Or simply create the mesh from the dense cloud, where you can select points by color or some other metric. Use Metashape or CloudCompare for creating the mesh and filtering points - whichever is better for you or creates the better result. I think if the dense cloud is good and dense enough, there should not be much difference in mesh quality compared with the pure depth maps workflow.

210
It is the precision of the vertex position (number of decimal places).
Value 6 - e.g. 12.451205 meters.
Value 3 should be enough (millimeter precision), but you can keep it at 6.
