Dear colleagues from the Metashape user community,
On the occasion of the passage of comet C/2023 A3 (Tsuchinshan-ATLAS), I'd like to share an experience of using Metashape to study the nucleus of a comet.
This work with Metashape led to the publication of an article in the scientific journal
Monthly Notices of the Royal Astronomical Society (MNRAS), Volume 531, Issue 2, June 2024, Pages 2494-2516,
https://doi.org/10.1093/mnras/stae1290 “Detection and characterization of icy cavities on the nucleus of comet 67P/Churyumov-Gerasimenko” by Philippe Lamy, Guillaume Faury, David Romeuf, Olivier Groussin.
This study represents the first confirmation and characterization of SAPs (Subsurface Access Points) on a comet, i.e. cavities 20 to 47 m deep with evidence of the presence of water ice. These cavities are areas of interest for a potential space probe that could gain direct access to subsurface materials. We have also correlated the onset of a transient jet with the insolation of the bottom of one of these icy cavities, thanks to images and a 3D model coupled to a thermal model.
I invite you to take a look at the short videos that accompany our article; the main video is at the end of this message:
Visit of four cavities on the nucleus of comet 67P/Churyumov-Gerasimenko
- 3D anaglyph red-cyan version:
https://www.youtube.com/watch?v=twsfRI52HZw
- 2D version:
https://www.youtube.com/watch?v=ADDyB76qmBI

Visualization of the cavity jet activity of the nucleus of comet 67P
- 3D anaglyph red-cyan version:
https://www.youtube.com/watch?v=3z_W9npxTl4
- 2D version:
https://www.youtube.com/watch?v=VyRSdKIXY-Q

Here are a few tips for viewing such a document in anaglyphic depth relief. It is best to watch in a dark room with no reflections on the TV screen, sitting 2 or 3 meters away, centered in front of it. You need well-filtered red-cyan glasses to separate the right/left eye channels, and a TV whose red (R) primaries do not also emit a little green+blue (i.e. cyan) and whose green (G) primaries do not emit a little red; otherwise the display injects the left image into the right eye and/or the right image into the left eye, producing stereo incoherence for the brain (the image "vibrates"). Finally, turn up the sound with good bass for the music I selected for these videos.
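To make the channel-separation point concrete, here is a minimal sketch (my own illustration, not code from our production pipeline) of how a red-cyan anaglyph combines the two eye channels: the left image feeds only the red channel and the right image only the green and blue channels, which is exactly why a display that leaks between primaries produces crosstalk between the eyes.

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Combine a stereo pair into a red-cyan anaglyph.

    The red channel carries the left-eye image; the green and blue
    (cyan) channels carry the right-eye image. Any display or pair
    of glasses that mixes these channels mixes the two eye views.
    """
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]   # red   <- left eye
    anaglyph[..., 1] = right_rgb[..., 1]  # green <- right eye
    anaglyph[..., 2] = right_rgb[..., 2]  # blue  <- right eye
    return anaglyph
```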
Since the arrival of the ROSETTA probe near the comet and the first images from the OSIRIS cameras (August 2014), we have been working on creating stereoscopic views of comet 67P. This red-cyan anaglyph archive is available on the website of the French space agency, CNES:
https://rosetta-3dcomet.cnes.fr/?q=en/albums/yes
I would particularly like to mention Guillaume Faury (IRAP), who built the stereophotographic archive with me and who is, in fact, the discoverer of Philae: he located Philae as early as March 2015 (published in June 2015) by comparing images and spotting a characteristic new, very bright spot, well before the September 2, 2016 confirmation flyby at very low altitude (2.7 km) fully resolved it. As these are real digital CCD photographs taken in two or more exposures with the extraordinary OSIRIS NAC and WAC cameras, the stereo-restituted pairs are sometimes incoherent for our brains: the comet rotates on itself, so the shadows cast by the relief change and do not cover exactly the same areas in the left-eye and right-eye images. In real life, our brain receives information from the left and right eyes at the same time, synchronized but just 7 cm apart, so such mismatched areas "vibrate" when the anaglyph is observed. The ROSETTA mission was not designed for stereophotography, so we search the programmed image series for pairs that match stereophotography best practice, i.e. a parallax of around 2° for foreground areas when viewed on a computer screen or 4K TV (more like 1° for a large cinema screen). It is the parallax between the two eyes that enables the brain to reconstruct volume and depth from the differences between the two images received by our neural network. We have two eyes to be very precise when working with our hands, i.e. for close-up actions (cover one eye and try a precision task like building a house of cards).
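For readers who would like to apply the ~2° parallax rule themselves, a small helper (a hypothetical utility of my own, not part of our toolchain) converts a target parallax angle into the required separation between the two camera positions: the parallax seen from a foreground target at distance d with baseline B is p = 2·atan(B / 2d), hence B = 2d·tan(p / 2).

```python
import math

def stereo_baseline(distance_m, parallax_deg=2.0):
    """Camera separation (baseline) giving a chosen parallax angle
    at a foreground target: p = 2*atan(B / (2*d)) => B = 2*d*tan(p/2)."""
    return 2.0 * distance_m * math.tan(math.radians(parallax_deg) / 2.0)
```

For example, a foreground point 100 m away needs two viewpoints roughly 3.5 m apart to reach 2° of parallax.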
Our team of astronomers works on the astrophysical exploitation of comet images to understand their structure, relief, evolution, dynamics, etc. The 3D reconstruction of small bodies is fundamental to the study of their physical properties, starting with their volume, which gives access to their density and thus to their formation processes. The study of their surfaces (and those of planets) and of evolutionary processes requires quantitative data (heights of cliffs, depths of basins, craters, pits, etc.), particularly for thermal modeling (energy received and integrated by the body), which depends on the lighting conditions. From an operational point of view, landing a vehicle on these surfaces (site selection) or operating a rover requires 3D reconstruction and stereo vision. Understanding the overall shape of small bodies makes it possible to understand accretion and evolution processes and to know the gravity field (a navigation aid). Geomorphological studies of terrestrial planets, asteroids and cometary nuclei require a better understanding of the geological processes involved, information on internal structure, localized morphological studies at high spatial resolution, and an understanding of global and local topography, particularly slopes.
It is obvious to most of you who are land surveyors, civil engineers, builders, archaeologists... but it is far from easy to determine a scale, measure precise heights and distances, or even locate the highest and lowest areas in a 2D image if you do not already know the terrain and only have a few shots all taken from the same point of view. You need to account for perspective, you need to find the right scale... Fortunately, great photogrammetric tools like Metashape are here to help!
With 2D imagery alone, it is even possible to miss pits, cavities, caves and small structures entirely, depending on the viewing angle and the exposure of the shots. This is the story of our discovery of icy cavities thanks to stereophotographic images from our CNES archive, then of their precise characterization thanks to the high-resolution 3D model of the surface (132 million facets) that I created with Metashape, and of the correlation of the activity of one of the icy cavities with a transient jet of matter.
The presence of cavities or caves, potential SAPs (Subsurface Access Points), on solar system objects had been envisaged but not confirmed on comets (see "Planetary Caves: A Solar System View of Processes and Products", J. Judson Wynne et al., https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2022JE007303 ). We were able to locate three very small ones thanks to our stereophotographic archive, and to describe and characterize them thanks to the photogrammetric model produced with Metashape (see our MNRAS article available online). Better still, this 3D model was placed in a simulated orbit with the correct orientation of its rotation axis and its angular velocity of rotation. Using a thermal model and the integration of the energy received at the bottom of the icy cavity, we correlated the onset of a transient jet. The energy received by each facet of the three cavities and the surrounding terrain was calculated 36 times per rotation (every 1240 s) to track short-term diurnal variations, and at 35 different times along the orbit, separated by a variable time interval (from 100 days at aphelion to 15 days at perihelion), to track long-term seasonal variations, then accumulated over a full orbital revolution of the comet. At each time step, the distance to the Sun, the orientation of each facet with respect to the Sun and the projected shadows were calculated using the OASIS software (Jorda et al. 2010). As solar insolation is independent of surface composition, water ice sublimation was neglected in this model.
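For readers curious how such an insolation integration works in principle, here is a minimal sketch that accumulates the energy received by each facet over a series of time steps. It is my own illustration, not the OASIS code used in the article, and it deliberately neglects the projected shadows (which OASIS does compute): each facet simply receives the solar flux scaled by the cosine of the incidence angle whenever it faces the Sun.

```python
SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 au

def facet_energy(normals, sun_dirs, sun_dists_au, dt_s):
    """Integrate insolation per facet over a sequence of time steps.

    normals      -- list of unit facet normals (x, y, z)
    sun_dirs     -- per-step unit vectors toward the Sun
    sun_dists_au -- per-step heliocentric distances in au
    dt_s         -- time step in seconds
    Returns accumulated J/m^2 per facet (projected shadows ignored).
    """
    energy = [0.0] * len(normals)
    for s, r in zip(sun_dirs, sun_dists_au):
        flux = SOLAR_CONSTANT / (r * r)  # inverse-square falloff
        for i, n in enumerate(normals):
            mu = n[0] * s[0] + n[1] * s[1] + n[2] * s[2]  # cos(incidence)
            if mu > 0.0:  # facet faces the Sun; night side contributes nothing
                energy[i] += flux * mu * dt_s
    return energy
```

In the real calculation the 1240 s step, the 36 steps per rotation and the 35 orbital epochs from the article would drive the `sun_dirs`/`sun_dists_au` sequences.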
Let's turn now to the creation of the 67P model in Metashape. As you know, there are best practices in photogrammetry, including camera placement, coverage, overlap, cross-trajectories and uniform, diffuse lighting conditions. However, the image archive from the ROSETTA mission does not always meet these optimal standards. The images show many moving shadows and sharp contrasts, as space is lit only by the Sun, with no diffuse light source. Coverage around the comet is also uneven, with some areas never or very rarely illuminated, such as the bottoms of deep circular pits. What's more, the images were taken with different filters (IR, red, orange, blue, UV), resulting in variations in brightness profiles and contrasts. For example, the small lobe of the comet may cast a shadow on the large lobe. This poses a number of challenges when aligning images in the software.
During the ROSETTA mission, several series of images were dedicated to the creation of a 3D model, which resulted in a 40-million-facet model: "The global meter-level shape model of comet 67P/Churyumov-Gerasimenko", A&A 607, L1 (2017), F. Preusker et al.,
https://www.aanda.org/articles/aa/full_html/2017/11/aa31798-17/aa31798-17.html
This team of astronomers used on the order of 1500 images taken over 18 months to produce a very good 3D model of the comet, but our cavities did not appear on it. I therefore decided to try to use all the OSIRIS images from the mission to obtain a more detailed model of its surface. I pre-processed, prepared and selected 9186 images for the post-perihelion model and 8553 for the pre-perihelion model. I was thus able to compute two models, before and after the passage through perihelion (the closest point to the Sun), which now makes it possible to highlight and quantify the areas that changed a great deal, most of them affected by solar erosion. Our team is preparing a new publication on the subject for 2025-2026.
The model took a year to complete, with tests, disappointments, successes and great satisfaction. To get the best results, I had to mask all the shadows cast on the surface and exclude them from the calculation in all the images. Another major problem was the comet's activity: the alignment procedure found SIFT key points in its coma "atmosphere" (ejecta, jets, background stars...), which obviously move and cannot be reliable reference points. Key point and tie point filtering was crucial in this project for the alignment of the more than 18,500 images loaded into the two pre- and post-perihelion chunks.
The results of the first image alignment contained several "comets" (several reconstructed instances of an object that exists only once), which had to be separated and isolated in other chunks (series of images). I aligned all the pieces separately, then merged and reassembled them into a single chunk.
I began by manually deleting the most grossly wrong tie points, iterating with the alignment recalculated by the optimization function. In a second phase, I used Metashape's mathematical filters: "reconstruction uncertainty" < 75, "reprojection error" < 0.375 and "projection accuracy" < 10. Even after this filtering and the optimization iterations, the tie-point model of the comet was still surrounded by a few tie points floating above its surface, most likely low-level jets and ejecta, plus a few artifacts. The mathematical filters did not leave me much of a solution, as I was afraid of removing too many tie points for the rest of the workflow and creating holes. I therefore proceeded to manually remove these tie points over the entire surface of the comet (like a barber shaving a face). This last operation was undoubtedly the most time-consuming, involving numerous manipulations all around the comet to check it from all orthographic and perspective angles and select the tie points I considered unreliable for deletion.
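The three thresholds above amount to a simple conjunctive filter: a tie point survives only if it passes all three criteria. As a minimal illustration of that logic (plain Python, not the Metashape API; the record fields here are hypothetical stand-ins for Metashape's internal tie-point attributes):

```python
def filter_tie_points(points,
                      max_uncertainty=75.0,
                      max_reproj_error=0.375,
                      max_proj_accuracy=10.0):
    """Keep only tie points passing all three Gradual Selection
    thresholds used in this project. Each point is a dict with
    'uncertainty', 'reproj_error' and 'proj_accuracy' keys
    (hypothetical stand-ins for Metashape's internal records)."""
    return [p for p in points
            if p['uncertainty'] < max_uncertainty
            and p['reproj_error'] < max_reproj_error
            and p['proj_accuracy'] < max_proj_accuracy]
```

In practice one applies such filters in several passes, re-running camera optimization between passes, exactly as described above.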
After all this image selection and separation, I finally used 7682 NAC images and 1504 WAC images, i.e. 9186 images, for the post-perihelion model. For the pre-perihelion model, I used 7835 images from the NAC camera and 718 images from the WAC camera, i.e. 8553 images. As there are far more post-perihelion images taken close to the comet, the post-perihelion model contains far more fine details of the comet surface.
You can find many illustrations of the various stages of this work in the PDF of my presentation at the Marseille observatory in early June 2024: here
https://proam-gemini.fr/photogrammetrie-reconstruction-et-visu-3d-de-67p/ (in French).
To illustrate the result of the alignment, the following video shows a trip through the positions of all the camera shots, along the probe's different orbits.
https://youtu.be/7aAs042FbZE

Once all the images had been aligned to the best of my ability, I launched the calculation of the dense cloud and, above all, of the depth maps, pushing the Metashape configuration tweaks beyond their default values.
Once the depth maps had been calculated, I ran the mesh calculation to finally obtain 132 million facets for the post-perihelion model. In fact, the result was 133 million: I had not noticed, since the alignment stage, a stray object left inside the comet. I was able to isolate and remove it using the Metashape function Model -> Gradual selection -> Connected component size -> 99%.
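The "connected component size" selection boils down to keeping the largest connected piece of the mesh and discarding small disconnected fragments. A small sketch of that idea (my own illustration, not Metashape's implementation), using union-find over shared vertex indices:

```python
def largest_component(faces):
    """Return the faces of the largest connected component of a
    triangle mesh, where faces are (i, j, k) vertex-index triples.
    Two faces are connected when they share a vertex; small
    disconnected fragments are dropped."""
    parent = {}

    def find(a):
        # path-halving find: walk to the root, shortcutting links
        while parent.setdefault(a, a) != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for i, j, k in faces:
        union(i, j)
        union(j, k)

    # group faces by the root of their first vertex
    groups = {}
    for f in faces:
        groups.setdefault(find(f[0]), []).append(f)
    return max(groups.values(), key=len)
```

Metashape's 99% size threshold achieves the same effect in one click; this sketch just shows the underlying graph idea.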
I was able to georeference my post-perihelion 3D model to the SPG SHAP7 model of Preusker et al. (2017) by identifying numerous markers between the two models. Then I aligned my pre-perihelion model with the post-perihelion model using the SIFT key-point method.
Once the 3D mesh model had been obtained and optimized, I had to move on to the texture calculation phase. Given the number of images obtained under very different lighting conditions, my various attempts in generic mode proved unsuccessful; the appearance obtained was really not good, which is logical. I then opted to calculate the texture using the average luminosity of the images in the zones under consideration. The result was still very unbalanced (as there are many more images of certain areas of the comet) and I had to retouch the UV map (32768 x 32768 pixels) in DXO to lift and brighten the dark areas and to compress and darken the areas with the brightest averages. I adjusted the gray levels of the texture by balancing the histogram of luminosities to obtain a Gaussian distribution centered around half the maximum level.
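That final balancing step can be illustrated by classic histogram matching: rank each pixel, convert the rank to an empirical quantile, and map the quantile through the inverse CDF of the target Gaussian. The sketch below is my own illustration with made-up mean/std values (128 and 40 for an 8-bit range), not the exact processing performed in DXO:

```python
from statistics import NormalDist

import numpy as np

def match_to_gaussian(gray, mean=128.0, std=40.0, vmax=255.0):
    """Remap gray levels so their distribution approximates a
    Gaussian centered at half the maximum level (128 for 8-bit).
    Histogram matching: rank -> quantile -> inverse Gaussian CDF.
    The mean/std values are illustrative, not from the article."""
    flat = gray.ravel().astype(np.float64)
    order = np.argsort(flat, kind="stable")
    n = flat.size
    # empirical quantile of each pixel, kept strictly inside (0, 1)
    quantiles = (np.arange(n) + 0.5) / n
    nd = NormalDist(mean, std)
    out = np.empty(n)
    out[order] = [nd.inv_cdf(q) for q in quantiles]
    return np.clip(out, 0.0, vmax).reshape(gray.shape)
```

The mapping is monotone, so the relative ordering of gray levels (and hence the relief detail) is preserved while the overall distribution is recentered.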
I judged the resulting UV-map texture satisfactory enough to move on to the phase of creating anaglyphic red-cyan relief video animations. Here is a path around this 3D model; some of the relief scenes are truly spectacular and breathtaking, and you will discover incredible structures and reliefs on this extraterrestrial body. The model covers the entirety of the comet, an object measuring 4.3 x 2.6 x 2.1 km. This 4K relief film highlights the diversity of surface features, with peaks, spires, circular erosion depressions, dunes, cliffs, escarpments, scree, pits, wells, fractures and faults, consolidated and unconsolidated terrains, polygonal networks, transport and deposits: in short, all the mineral and sidereal beauty of the comet. I advise you to watch it directly with the YouTube application on your 4K TV, in the evening in the dark and with the sound turned up with powerful bass to fully appreciate Scott Buckley's music. For the 3D anaglyph version, the high compression on YouTube leads to the appearance of colored "ghosts". An anaglyph relies on carrying the relief via colors, and compression slightly alters these colors and details, creating inconsistencies for the brain; this can cause vibrations in certain parts of the image, or make them uncomfortable to look at. It is also crucial to use glasses that correctly filter red for the left eye and cyan for the right, and to ensure that the display's light sources, such as TV LEDs, emit exclusively red for R, green for G and blue for B (a G element that emits red, or an R element that emits green, contributes to crosstalk):
The 3D anaglyph red-cyan video is here:
https://youtu.be/ztZ2d3laOQA

The 2D video is here, but the stereoscopic 3D version is far more impressive:
https://youtu.be/RJYOGgD4bo4

If you want to learn more about the evolution of the comet's surface morphology, you can read this scientific article: "Surface Morphology of Comets and Associated Evolutionary Processes: A Review of Rosetta's Observations of 67P/Churyumov-Gerasimenko",
https://doi.org/10.1007/s11214-019-0602-1

My current regret is that Metashape's rendering engine cannot compute ray tracing or shadow mapping for animations. I dream of being able to produce videos with the shadows cast by the Sun. I tried to import all 132 million facets into Blender, but it is not designed for that many facets and crashes. The Blender development team is aware of this, as I have opened a GitHub ticket about it.
Conclusion

Many thanks and congratulations to the Metashape design and development team, and in particular to Alexey Pasumansky for his responsiveness on the forum, his support and his great competence.
Our research is part of recent efforts that recognize the value of stereophotography as a tool for visualizing and characterizing the surfaces of solar system bodies at spatial scales and dimensions not generally achieved by digital terrain models. It is also in line with the growing interest in subsurface access points and their use as a means of probing the interior of these bodies. We have shown for the first time that a cometary nucleus, namely that of comet 67P/Churyumov-Gerasimenko, features cavities that undoubtedly meet the criteria for SAPs (Subsurface Access Points). Similarly, we have shown for the first time a link between the brief insolation of the bottom of an icy cavity and the onset of a large transient jet.
Kind regards, David ROMEUF.