
Author Topic: Metashape in space, example with comet 67P Churyumov-Gerasimenko

sonno

Dear colleagues from the Metashape user community,

On the occasion of the passage of comet C/2023 A3 (Tsuchinshan-ATLAS), I'd like to share an experience of using Metashape to study the nucleus of a comet.

This work with Metashape led to the publication of an article in the scientific journal Monthly Notices of the Royal Astronomical Society (MNRAS), Volume 531, Issue 2, June 2024, Pages 2494-2516, https://doi.org/10.1093/mnras/stae1290 “Detection and characterization of icy cavities on the nucleus of comet 67P/Churyumov-Gerasimenko” by Philippe Lamy, Guillaume Faury, David Romeuf, Olivier Groussin.

This study represents the first confirmation and characterization of SAPs (Subsurface Access Points) on a comet, i.e. cavities 20 to 47 m deep, with evidence of the presence of water ice. These cavities are areas of interest for a potential space probe that could gain direct access to subsurface materials. We also correlated the onset of a transient jet with the insolation of the bottom of one of these icy cavities, using images together with a 3D model coupled to a thermal model.

I invite you to take a look at the short videos that accompany our article, but the main video is at the end of this message:

A visit to four cavities on the nucleus of comet 67P/Churyumov-Gerasimenko

- 3D anaglyph red-cyan version: https://www.youtube.com/watch?v=twsfRI52HZw
- 2D version: https://www.youtube.com/watch?v=ADDyB76qmBI

Visualization of the jet activity of a cavity on the nucleus of comet 67P

- 3D anaglyph red-cyan: https://www.youtube.com/watch?v=3z_W9npxTl4
- 2D version: https://www.youtube.com/watch?v=VyRSdKIXY-Q

Here are a few tips for viewing such a document in anaglyph relief: it's best to watch in a dark room with no light reflections on the TV screen, standing 2 or 3 meters away, centered in front of it. You need well-filtered red-cyan glasses to separate the left/right eye channels, and a TV whose red (R) subpixels don't emit a little green+blue (cyan) and whose green (G) subpixels don't emit a little red; otherwise the TV injects the left image into the right eye and/or the right image into the left eye, a stereo incoherence that makes the image "vibrate" for the brain. Finally, turn up the sound and get good bass for the music I've selected for these videos.

Since the arrival of the ROSETTA probe near the comet and the first images from the OSIRIS cameras (August 2014), we've been working on creating stereoscopic views of comet 67P. This red-cyan anaglyph archive is available on the website of the French space agency, CNES: https://rosetta-3dcomet.cnes.fr/?q=en/albums/yes . The stereophotographic archive was made with me by Guillaume Faury (IRAP), who is in fact the discoverer of Philae: he located Philae as early as March 2015 (published in June 2015) by spotting a characteristic new very bright spot in compared images, well before the September 2, 2016 confirmation flyby at very low altitude (2.7 km) that fully resolved the lander.

As these are real digital CCD photographs taken at two or more different times with the extraordinary OSIRIS NAC and WAC CCD cameras, the stereo-restituted pairs are sometimes incoherent for our brains: the comet rotates on itself, so the shadows cast by the relief on its surface change and do not cover exactly the same areas in the left-eye and right-eye images. In real life, the two eyes receive their information at the same time, synchronized, just about 7 cm apart, so such mismatched areas "vibrate" when the anaglyph is observed. The ROSETTA mission was not designed for stereophotography, so we search the programmed series for image pairs that match stereophotography best practice, i.e. a parallax of around 2° for foreground areas when restituted on a computer screen or 4K TV (closer to 1° for a large cinema screen). It is the parallax between the two eyes that enables the brain to recover volume and depth from the differences between the two images received by our neural network. We have two eyes to be very precise when working with our hands, i.e. for close-up actions (cover one eye and try a precision task like building a house of cards).
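
For readers who'd like to experiment, composing a red-cyan anaglyph from a rectified stereo pair is simple: the left image feeds the red channel, the right image the green and blue channels. A minimal Python sketch (file names are placeholders; both images must have the same dimensions):

Code: [Select]
import numpy as np
from PIL import Image

# Load a rectified stereo pair (placeholder file names).
left = np.asarray(Image.open("left.png").convert("RGB"))
right = np.asarray(Image.open("right.png").convert("RGB"))

# Red-cyan anaglyph: the red channel comes from the left-eye image,
# the green and blue channels from the right-eye image.
anaglyph = np.empty_like(left)
anaglyph[..., 0] = left[..., 0]   # R <- left eye
anaglyph[..., 1] = right[..., 1]  # G <- right eye
anaglyph[..., 2] = right[..., 2]  # B <- right eye

Image.fromarray(anaglyph).save("anaglyph.png")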

Our team of astronomers works on the astrophysical exploitation of comet images to understand their structure, relief, evolution, dynamics, etc. The 3D reconstruction of small bodies is fundamental to the study of their physical properties, starting with their volume, which gives access to their density and thus to their formation processes. The study of their surfaces (and those of planets) and of evolutionary processes requires quantitative data: heights of cliffs, depths of basins, craters, pits, and so on. This is particularly true for thermal modeling (energy received and integrated by the body), which needs accurate lighting conditions. From an operational point of view, landing a vehicle on these surfaces (site selection) or operating a rover requires 3D reconstruction and stereo vision. Knowing the overall shape of small bodies is needed to understand accretion and evolution processes and to determine the gravity field (a navigation aid). Geomorphological studies of terrestrial planets, asteroids and cometary nuclei require a better understanding of the geological processes involved, information on internal structure, localized morphological studies at high spatial resolution, and knowledge of global and local topography, particularly slopes.

This is obvious to those of you who are land surveyors, civil engineers, builders, archaeologists... but it's far from easy to determine a scale, measure precise heights and distances, or even locate the highest and lowest areas on a 2D shot if you don't already know the area and only have a few shots, all taken from the same point of view. You need to account for perspective, you need to find the right scale... Fortunately, great photogrammetric tools like Metashape are here to help!

With 2D imagery alone, one can even fail to recognize pits, cavities, caves and small structures, depending on the viewing angle and the lighting of the shots. This is the story of our discovery of icy cavities thanks to stereo photographic images from our CNES archive, followed by their precise characterization thanks to the high-resolution 3D model of the surface (132 million facets) that I created with Metashape, and by the correlation of the activity of one of the icy cavities with a transient jet of matter.

The presence of cavities or caves serving as SAPs (Subsurface Access Points) on solar system objects had been envisaged but never confirmed on comets (see "Planetary Caves: A Solar System View of Processes and Products", J. Judson Wynne et al., https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2022JE007303 ). We were able to locate three very small ones thanks to our stereophotographic archive, and to describe and characterize them thanks to the photogrammetric model produced with Metashape (see our MNRAS article available online). Better still, this 3D model was placed in a simulated orbit with the correct orientation of its rotation axes, its angular velocity of rotation on itself, etc. Using a thermal model and the integration of the energy received at the bottom of the icy cavity, we correlated the onset of a transient jet. The energy received by each facet of the three cavities and the surrounding terrain, accumulated over a full orbital revolution of the comet, was calculated 36 times per rotation (every 1240 s) to track short-term diurnal variations, and at 35 different times along the orbit separated by a variable interval (from 100 days at aphelion to 15 days at perihelion) to track long-term seasonal variations. At each time step, the distance to the Sun, the orientation of each facet with respect to the Sun and the projected shadows were calculated using the OASIS software (Jorda et al. 2010). As solar insolation is independent of surface composition, water ice sublimation was neglected in this model.
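
The published shadowing computation was done with the OASIS software of Jorda et al. (2010); purely as an illustration of the per-facet integration described above, here is a simplified Python sketch that ignores projected shadows (the facet, Sun-direction and time-step arrays are assumed inputs, not mission data):

Code: [Select]
import numpy as np

SOLAR_CONST = 1361.0  # W/m^2 at 1 au

def facet_energy(normals, sun_dirs, sun_dists_au, dt):
    """Integrate insolation per facet over a series of time steps.

    normals      (F, 3) unit facet normals in the body frame
    sun_dirs     (T, 3) unit vectors toward the Sun in the body frame
    sun_dists_au (T,)   heliocentric distance in au at each step
    dt           (T,)   duration of each step in seconds (e.g. 1240 s)

    Returns (F,) accumulated energy in J/m^2. Projected shadows are
    ignored here; OASIS handles them in the published model.
    """
    # Cosine of the solar incidence angle; facets facing away receive 0.
    mu = np.clip(normals @ sun_dirs.T, 0.0, None)   # (F, T)
    flux = SOLAR_CONST / sun_dists_au**2            # (T,)
    return (mu * flux * dt).sum(axis=1)             # (F,)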

Let's turn now to the creation of the 67P model in Metashape:

As you know, there are best practices in photogrammetry, including shot placement, coverage, overlap, cross-trajectories and uniform, diffuse lighting conditions. However, the image archive from the ROSETTA mission does not always meet these optimal standards. The images show many moving shadows and sharp contrasts, as space is lit only by the Sun, with no diffuse light source; for example, the small lobe of the comet may cast a shadow on the large lobe. Coverage around the comet is also uneven, with some areas never or very rarely illuminated, such as the bottom of deep circular troughs. What's more, the images are taken through different filters (IR, red, orange, blue, UV), resulting in variations in brightness profiles and contrasts. All of this poses a number of challenges when aligning images in the software.

During the ROSETTA mission, several series of images were dedicated to the creation of a 3D model, which resulted in a 40-million-facet model: "The global meter-level shape model of comet 67P/Churyumov-Gerasimenko", A&A 607, L1 (2017), F. Preusker et al., https://www.aanda.org/articles/aa/full_html/2017/11/aa31798-17/aa31798-17.html . That team of astronomers used on the order of 1500 images over 18 months to produce a very good 3D model of the comet, but our cavities did not appear on it. I therefore decided to try to use all the OSIRIS images from the mission to obtain a more detailed model of the surface. I pre-processed, prepared and selected 9186 images for the post-perihelion model and 8553 for the pre-perihelion model. I was thus able to calculate two models, before and after the passage through perihelion (the closest point to the Sun), which now makes it possible to highlight and quantify the areas that changed a great deal, most of them affected by solar erosion. Our team is preparing a new publication on the subject for 2025-2026.

The model took a year to complete, with tests, disappointments, successes and great satisfaction. To get the best results, I had to mask all the shadows cast on the surface and exclude them from the calculation on every image. Another major problem was the comet's activity. The alignment procedure found SIFT key points in its coma "atmosphere" (ejecta, jets, background stars...), which obviously move and cannot be reliable reference points. Key-point and tie-point filtering was crucial in this project for the alignment of over 18500 images loaded into the two pre- and post-perihelion chunks.
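
For those who script their workflows, the same masking and mask-aware alignment can be driven from the Metashape Python API. A minimal sketch, assuming externally prepared mask files and the 1.6+ API (file paths and parameter values are illustrative, not the exact ones I used; some names changed again in 2.x):

Code: [Select]
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["osiris_0001.png", "osiris_0002.png"])  # placeholder list

# Load externally prepared shadow/background masks, one file per image.
chunk.importMasks(path="masks/{filename}_mask.png",
                  source=Metashape.MaskSourceFile,
                  operation=Metashape.MaskOperationReplacement)

# Match with masked regions excluded from key-point detection.
chunk.matchPhotos(downscale=1,
                  generic_preselection=True,
                  filter_mask=True,
                  keypoint_limit=40000,
                  tiepoint_limit=4000)
chunk.alignCameras()
doc.save("67p.psx")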

The results of the first image alignment contained several "comets" (several solutions for what was a single identified object), which had to be separated and isolated in other chunks (series of images). I aligned all the pieces separately, then merged and reassembled them into a single chunk.

I began by manually deleting the most grossly wrong tie points, iterating with the alignment recalculated by the optimization function. In a second phase, I used Metashape's gradual-selection filters: "reconstruction uncertainty" < 75, "reprojection error" < 0.375 and "projection accuracy" < 10. Even after this filtering and the optimization iterations, the tie-point comet was still surrounded by a few tie points above its surface, most likely low-level jets and ejecta, plus a few artifacts. I didn't have much room left with the mathematical filters, as I was afraid of removing too many tie points for the rest of the workflow and creating holes. I therefore proceeded to remove these tie points manually from the entire surface of the comet (like a barber shaving a face). This last operation was undoubtedly the most time-consuming, involving numerous manipulations all around the comet to check it from every ortho and perspective angle and select the tie points I considered unreliable for deletion.
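
The same gradual-selection thresholds can also be applied from the Python API. A sketch using the 1.x class names, run from the Metashape console (in Metashape 2.x, chunk.point_cloud became chunk.tie_points and the Filter class moved accordingly):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

def filter_and_optimize(criterion, threshold):
    """Remove tie points exceeding the given criterion, then re-optimize."""
    f = Metashape.PointCloud.Filter()
    f.init(chunk, criterion=criterion)
    f.removePoints(threshold)
    chunk.optimizeCameras()

filter_and_optimize(Metashape.PointCloud.Filter.ReconstructionUncertainty, 75)
filter_and_optimize(Metashape.PointCloud.Filter.ReprojectionError, 0.375)
filter_and_optimize(Metashape.PointCloud.Filter.ProjectionAccuracy, 10)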

After all this image selection and separation, I finally used 7682 NAC images and 1504 WAC images, i.e. 9186 images, for the post-perihelion model. For the pre-perihelion model, I used 7835 images from the NAC camera and 718 images from the WAC camera, i.e. 8553 images. As there are far more post-perihelion images taken close to the comet, the post-perihelion model contains far more fine details of the comet surface.

You can find many illustrations of the various stages of this work in the PDF of my presentation at the Marseille observatory in early June 2024: here https://proam-gemini.fr/photogrammetrie-reconstruction-et-visu-3d-de-67p/ (in French).

To illustrate the result of the alignment, the following video shows a trip through the positions of all the camera shots, along the probe's different orbits: https://youtu.be/7aAs042FbZE

Once all the images had been aligned to the best of my ability, I launched the calculation of the dense cloud and, above all, the depth maps, pushing Metashape's configuration tweaks beyond their default values.

Once the depth maps had been calculated, I ran the mesh calculation to finally obtain 132 million facets for the post-perihelion model. In fact, the result was 133 million: a stray object left over from the alignment had gone unnoticed inside the comet. I was able to isolate and remove it using the Metashape function Model -> Gradual selection -> Connected component size -> 99%.
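
Scripted, the depth-map and mesh steps look roughly like the following sketch, assuming the 1.6+ API where quality is expressed as a downscale factor (the exact settings I pushed are not reproduced here; the stray-component cleanup above was done in the GUI):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# Depth maps at full resolution (downscale=1 ~ "Ultra high" in the GUI).
chunk.buildDepthMaps(downscale=1, filter_mode=Metashape.MildFiltering)

# Mesh from the depth maps; use face_count_custom for very dense models.
chunk.buildModel(source_data=Metashape.DepthMapsData,
                 surface_type=Metashape.Arbitrary,
                 face_count=Metashape.HighFaceCount)

Metashape.app.document.save()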

I was able to georeference my post-perihelion 3D model to the SPG SHAP7 model of Preusker et al. (2017) by identifying numerous common markers between the two models. Then I aligned my pre-perihelion model with the post-perihelion model using the SIFT key-point method.

Once the 3D mesh model had been obtained and optimized, I had to move on to the texture calculation phase. Given the number of images obtained under very different lighting conditions, my various attempts in generic mode proved unsuccessful; the appearance obtained was really not good, which is logical. I then opted to calculate the texture using the average brightness of the images in the areas under consideration. The result was still very unbalanced (as there are many more images of certain areas of the comet), and I had to retouch the UV texture map (32768 x 32768 pixels) in DxO, lifting and brightening the dark areas and compressing and darkening the areas with the brightest averages. I adjusted the gray levels of the texture by balancing the histogram of luminosities to obtain a Gaussian distribution centered around half the maximum level.
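
I did this retouching interactively in DxO, but the underlying idea, remapping gray levels so their histogram follows a Gaussian centered on mid-gray, can be sketched in a few lines of numpy/scipy (for a 32768 x 32768 texture you would process in tiles; shown whole here for clarity):

Code: [Select]
import numpy as np
from scipy.special import erfinv

def match_to_gaussian(gray, mean=0.5, std=0.15):
    """Remap gray levels (floats in [0, 1]) so their histogram
    follows a Gaussian centered on mid-gray."""
    flat = gray.ravel()
    # Empirical CDF rank of every pixel, strictly inside (0, 1).
    ranks = (flat.argsort().argsort() + 0.5) / flat.size
    # The inverse Gaussian CDF maps ranks onto the target distribution.
    target = mean + std * np.sqrt(2.0) * erfinv(2.0 * ranks - 1.0)
    return np.clip(target.reshape(gray.shape), 0.0, 1.0)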

I judged the resulting UV-map texture satisfactory enough to move on to the phase of creating red-cyan anaglyph relief video animations. Here's a journey around this 3D model; some of the relief scenes are truly spectacular and breathtaking, and you'll discover incredible structures and reliefs on this extraterrestrial body. The model covers the entirety of the comet, an object measuring 4.3 x 2.6 x 2.1 km. This 4K relief film highlights the diversity of surface features: peaks, spires, circular erosion depressions, dunes, cliffs, escarpments, scree, pits, wells, fractures and faults, consolidated and unconsolidated terrains, polygonal networks, transport and deposits; in short, all the mineral and sidereal beauty of the comet. I advise you to watch it directly with the YouTube application on your 4K TV, in the evening, in the dark, with the sound turned up and powerful bass to fully appreciate Scott Buckley's music. For the 3D anaglyph version, the high compression on YouTube leads to the appearance of colored "ghosts". An anaglyph relies on carrying the relief through colors, and compression slightly alters these colors and details, creating inconsistencies for the brain; this can make certain parts of the image vibrate or become uncomfortable to look at. It's also crucial to use glasses that correctly filter red for the left eye and cyan for the right, and to ensure that the display's light sources, such as TV LEDs, emit exclusively red for R, green for G and blue for B (a G subpixel that emits red, or an R subpixel that emits green, contributes to crosstalk):

The 3D anaglyph red-cyan video is here: https://youtu.be/ztZ2d3laOQA

The 2D video is here, but the stereoscopic 3D version is far more impressive: https://youtu.be/RJYOGgD4bo4

If you want to learn more about the evolution of the comet's surface morphology, you can read this scientific article: “Surface Morphology of Comets and Associated Evolutionary Processes: A Review of Rosetta's Observations of 67P/Churyumov-Gerasimenko”.
https://doi.org/10.1007/s11214-019-0602-1

My current regret is that Metashape's rendering engine can't calculate ray tracing or shadow mapping for animations. I dream of being able to produce videos with the shadows cast by the Sun. I've tried to import all 132 million facets into Blender, but it's not designed for that many facets and crashes. The Blender development team is aware of this, as I've opened a GitHub ticket about it.

Conclusion

Many thanks and congratulations to the Metashape design and development team, and in particular to Alexey Pasumansky for his responsiveness on the forum, his support and his great competence.

Our research is part of recent efforts that recognize the value of stereophotography as a tool for visualizing and characterizing the surfaces of solar system bodies at spatial scales and dimensions not generally achieved by digital terrain models. It is also in line with the growing interest in subsurface access points and their use as a means of probing the interior of these bodies. We have shown for the first time that a cometary nucleus, namely that of comet 67P/Churyumov-Gerasimenko, features cavities that undoubtedly meet the criteria for SAPs (Subsurface Access Points). Similarly, we have shown for the first time a link between the brief insolation of the bottom of an icy cavity and the departure of a large transient jet.

Kind regards, David ROMEUF.

sonno

Re: Metashape in space, example with comet 67P Churyumov-Gerasimenko
« Reply #1 on: October 16, 2024, 12:50:34 PM »
I also made this modest model of the asteroid Lutetia, which the ROSETTA probe flew past on its way to comet 67P ("Chury").

https://youtu.be/aHaI2knmVHo

And this one of the asteroid Ryugu:

https://youtu.be/M4nr6yzb0hA

Mak11

Re: Metashape in space, example with comet 67P Churyumov-Gerasimenko
« Reply #2 on: October 16, 2024, 06:05:36 PM »
Nice work!

sonno

Re: Metashape in space, example with comet 67P Churyumov-Gerasimenko
« Reply #3 on: October 26, 2024, 01:08:11 PM »

In my experience report, I completely forgot to mention what probably interests me the most when used in combination with Metashape: the production of stereoscopic documents, specifically the stereo restitution of the 3D models we create. A 3D movie is far more impressive than a flat 2D movie without volume. The immersion in our 3D models is complete; you can grasp the volumes, sharp angles, and all the geometric nuances of a surface... It's worth remembering that our brain perceives space in relief: we are always aware of volumes, and space is not a flat plane.

Metashape offers stereoscopic functions, which I used for the relief version of my videos. In the menu Tools -> Preferences -> Stereoscopic Display, you can start by selecting the stereo restitution mode and the parallax. I used the red-cyan anaglyph mode, which is probably the most familiar to the public and the easiest to distribute, as it doesn't require expensive specialized equipment, just filter glasses that are easy to acquire at low cost. In my case, I didn't risk the problem of color rivalry because my subject, the comet, is in black and white, and anaglyph works perfectly with black and white. Indeed, a strongly cyan object cannot pass through the red filter (left eye), and a strongly red object cannot pass through the cyan filter (right eye); such objects are then seen by only one eye, becoming monocular among the other stereoscopic elements, which is unnatural. They vibrate in the image, which is very unpleasant. A stereoscopic inconsistency is nevertheless introduced by YouTube's heavy compression: since the brain's neural network recreates the perception of depth from the difference between the left and right images, any unnatural alteration between these images feels uncomfortable to our vision.

Our eyes perceive a scene from two slightly different, horizontally offset angles. This difference in angle between the eyes, called parallax, is what our brain uses to estimate depth, and stereoscopy simulates it by presenting two offset images. I used parallax settings of 0.4 (i.e. 1.1°) and 0.7 (i.e. 2.0°). When shooting in stereoscopy, the choice of parallax, or of the stereo base in meters, is guided by the distance to the nearest object, which will appear in the foreground of the relief image; by the distance of the furthest object, known as infinity, so that our eyes are not forced to diverge by more than 1° when the document is projected (we look at objects at infinity with parallel eyes); and by the size of the stereo restitution on the screen. For example, a parallax of about 2° (more precisely the 1.9° given by the 1/30 rule) is often used for foreground objects, with the nearest at 2 meters and a 35 mm focal length, for display on a computer screen or TV. This common value of 1.9-2.0° corresponds to our eye spacing of 65 mm for an object 2 m away. It is the historical 1/30 rule used for 36 mm film with a 35 mm lens, which produces a parallax shift of about 1.2 mm on the film, i.e. 3.4% of the film's width, and thus 3.4% of the projected image if the operator hasn't cropped the captured image.
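
As a quick check of those numbers, here is the 1/30 rule worked out in a few lines of Python:

Code: [Select]
import math

near = 2.0             # nearest object at 2 m
base = near / 30.0     # 1/30 rule: ~67 mm, close to our 65 mm eye spacing
parallax = math.degrees(math.atan(base / near))
print(f"base = {base * 1000:.0f} mm, parallax = {parallax:.1f} deg")  # ~1.9 deg

# Parallax shift on a 36 mm frame with a 35 mm lens:
# focal * base / distance, all in mm -> close to the ~1.2 mm / 3.4 % above.
shift = 35.0 * (base * 1000) / 2000.0
print(f"shift = {shift:.1f} mm = {100 * shift / 36.0:.1f} % of frame width")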

However, 2° for screen projection in a cinema is far too taxing for the brain, impossible to fuse comfortably; in those conditions, the parallax needs to be closer to 1° (or even less; it all needs to be calculated). I was fortunate to be able to project these films, calculated with Metashape at 1.1° parallax, on a large dome screen in a 22 m diameter planetarium, with a stereo restitution size of around 10 m. The effect was incredible and mind-blowing: the comet was above us, and after a while it felt like being next to it in a spaceship worthy of the most stunning science fiction movies. Be careful: in stereoscopy, there's usually only one seat in the room that is ortho-stereoscopic for the viewer (without distortion). I covered all these considerations for calculating the parameters, while keeping viewers' visual comfort and our physiological limits in mind, in this article:

https://www.david-romeuf.fr/3D/CalculsStereophotographie/CalculsStereophotographie.html


Now let's get to the most complex part from my perspective: the camera path, i.e. the animation path in Metashape. The software allows numerous settings for perspective, duration, pixel resolution, frame rate, field of view in degrees, pre-calculated horizontal or vertical paths around the bounding box, yaw rotation angles, and more.

For the camera path, the flyover tour, I mainly used sequences of 3 to 5 waypoints with the smooth camera track feature. Most importantly, for a bilobed object like comet 67P, I was very careful to avoid generating any "stereoscopic window violations". Window violations are the most common beginner mistake in stereoscopy. We generally aim for the object to pop out in front of the display screen, as the effect is more impressive for the viewer; this works well for the brain as long as it's not placed in a situation it never encounters naturally.

In fact, stereo restitution has a window: it's like looking through an airplane or spacecraft porthole, or through the window of a building. In this natural situation, each of our eyes doesn't see quite the same thing at the left and right edges of the porthole, which is the stereoscopic window. One should therefore avoid producing a montage or stereoscopic document that artificially shows what should be hidden by the left and right edges of the porthole or window... Otherwise, the brain doesn't know where to place the object; there's ambiguity about the object's position, since this never happens naturally. A stereoscopic window violation is very unpleasant for the viewer, especially with a still image; the effect is less noticeable in a dynamic video, especially if attention is drawn to the center of the screen where there is no artificial violation.

On a screen, the stereoscopic window is simply the frame or boundary of the displayed document. More simply, the rule is to NEVER CUT an object popping out in front of the screen's stereoscopic window at the left or right edge. The problem is less sensitive at the top and bottom edges of the porthole or window because, quite simply, our eyes are offset horizontally, not vertically. I discussed this major issue in stereoscopy in this article (illustration D):

https://www.david-romeuf.fr/3D/Anaglyphes/MontageFenetreVolume/AnaglyphAssemblyWindowsVolume.html

Since the stereoscopic base/parallax of Metashape's stereoscopic camera is fixed (you cannot vary the base, the distance between the two cameras, along the path as we do with our eyes), the restitution may be frustrating for a stereophotographer wanting a significant depth effect throughout the sequence. Metashape's stereoscopic cameras remain parallel and do not shoot in convergence (which should in any case be avoided because of the distortion it introduces).

In fact, you must set the position of the stereoscopic window once and for all for the sequence, by right-clicking on the object -> Center view. This point must be chosen at the ideal location on the object to comply with the rules of stereoscopy. If the sequence starts 10 km from the comet with a 2° parallax, the effect will be fairly flat, similar to what we would see with our eyes spaced 65 to 70 mm apart. (To get the same depth effect on both the near and the far object, you would need to adjust the parallax at the beginning and end of the camera path, as in hyper-stereophotography with a base of, say, 100 m to 1 km between the two cameras for a distant mountain range, or even the diameter of the Earth to attempt lunar relief!) The object won't pop out of the screen but will remain in the background; even if it cuts the left or right edge of the window, it won't be in violation. The closer the path gets to the comet, the more noticeable the depth effect becomes, reaching its maximum at the closest point, 100 m away, at the end of the camera path. Throughout the journey, however, be careful not to cut off parts of objects that are in a popping-out position at the left and right edges of the generated video. Depending on the camera path and the shapes of the objects, it's not always easy to follow the rules of stereoscopy. For example, I have a few window violations in the sequence passing through the neck of 67P, at the lower left, on all the collapse pits; I didn't find these stereoscopic errors too uncomfortable in this sequence. I always test a sequence before rendering it in high resolution, and Metashape's functionality is very useful for this. I often had to modify and adjust my trajectories because they introduced stereoscopic errors.

Feel free to reach out if I can assist. Sincerely, David.


spader

Re: Metashape in space, example with comet 67P Churyumov-Gerasimenko
« Reply #4 on: October 27, 2024, 10:12:16 PM »
Hi David,

I'd just like to reach out and thank you sincerely for writing this up. It's really interesting. Keep up the good work, and please give us a heads-up on the forum as your research goes forward.