X-ray tomography offers scientists and engineers a powerful way to non-invasively explore the inner workings of objects in 3D. Whether it’s peering inside advanced battery materials or inspecting the layers of a computer chip, the method uses the same fundamental principles as a medical CT scan. As the object rotates, X-ray images are taken from various angles, and advanced software reconstructs a 3D view of its internal structure.
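For readers who want to see the principle in action, the short sketch below uses scikit-image's Radon transform tools to project a standard test object from many angles and reconstruct a 2D slice from those projections. The phantom and the one-projection-per-degree sampling are illustrative choices, not parameters from any beamline experiment.

```python
# A minimal sketch of the core idea behind CT reconstruction.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()          # a standard 2D test object
angles = np.linspace(0.0, 180.0, 180)  # one projection per degree

# Each projection is a "shadow" of the object from one viewing angle;
# stacking them over all angles gives the sinogram.
sinogram = radon(image, theta=angles)

# Filtered back-projection inverts the sinogram into a 2D slice.
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")
```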
But achieving nanoscale resolution, which is necessary for revealing tiny features on microchips, requires far greater precision than a typical medical scan. We're talking about a resolution that's roughly 10,000 times finer.
At the Hard X-ray Nanoprobe (HXN) beamline, part of the National Synchrotron Light Source II (NSLS-II) at Brookhaven National Laboratory, scientists are pushing those boundaries. NSLS-II, a US Department of Energy (DOE) Office of Science user facility, generates X-rays more than a billion times brighter than those used in conventional CT scans, enabling the level of detail needed for cutting-edge materials research.
Tomography relies on collecting projection images from a wide range of angles. But in many real-world applications, that’s easier said than done.
Take flat computer chips, for instance. You can't rotate them through a full 180 degrees and keep collecting good images: at steep angles, the X-rays must pass through far more material to cross the flat chip, so few of them make it through, restricting the number of usable views.
This angular limitation creates a data gap known as the “missing wedge,” which causes blurry or distorted reconstructions.
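The effect is easy to reproduce in the same toy setting: simply withholding a wedge of viewing angles degrades the reconstruction. The 60-degree gap below is an arbitrary illustrative choice.

```python
# A rough illustration of the missing-wedge artifact.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()

# Full coverage: projections spanning the whole 180-degree range.
full_angles = np.linspace(0.0, 180.0, 180)
full_recon = iradon(radon(image, theta=full_angles), theta=full_angles)

# Limited coverage: a 60-degree wedge of views is missing, as happens
# when a flat sample blocks the beam at steep angles.
wedge_angles = np.linspace(30.0, 150.0, 120)
wedge_recon = iradon(radon(image, theta=wedge_angles), theta=wedge_angles)

# The limited-angle result shows the characteristic blur and streaking.
print("full-angle error:   ", np.abs(full_recon - image).mean())
print("missing-wedge error:", np.abs(wedge_recon - image).mean())
```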
For decades, this problem has limited the applications of X-ray and electron tomography in many areas of science and technology.
Hanfei Yan, Study Corresponding Author and Lead Beamline Scientist, HXN Beamline, Brookhaven National Laboratory
To address this challenge, researchers at NSLS-II developed a new method called the Perception Fused Iterative Tomography Reconstruction Engine (PFITRE). This approach combines physical modeling with artificial intelligence (AI), using a convolutional neural network (CNN) trained on simulated data.
CNNs are AI models that automatically learn to recognize patterns in data. They excel at identifying key image features, such as edges, textures, and shapes, and combining them to form accurate predictions. In PFITRE, the AI contributes perceptual insight into what the image should look like, while a physics-based model ensures those results remain scientifically valid.
The process iterates until both models agree, yielding a 3D reconstruction that’s both visually sharp and grounded in real measurements.
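In spirit, the iteration resembles the widely used "plug-and-play" pattern, sketched below in toy form: a physics step pulls the estimate toward the measured projections, and a prior step cleans it up. Here a simple Gaussian filter stands in for the trained network, and the step size is hand-tuned for this toy problem; none of this is the authors' actual code.

```python
# A minimal sketch of a physics-plus-prior iteration of the kind
# PFITRE performs. A Gaussian filter is a loudly-labeled stand-in
# for the trained CNN so the loop stays self-contained and runnable.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.filters import gaussian
from skimage.transform import radon, iradon

image = shepp_logan_phantom()
angles = np.linspace(30.0, 150.0, 120)     # limited-angle measurement
sinogram = radon(image, theta=angles)      # the "measured" projections

# Keep estimates inside the reconstruction circle that radon assumes.
c = image.shape[0] // 2
yy, xx = np.indices(image.shape)
mask = (yy - c) ** 2 + (xx - c) ** 2 <= (c - 1) ** 2

x = np.zeros_like(image)
step = 0.002                               # small hand-tuned step size
for _ in range(20):
    # Physics step: move the estimate toward agreement with the data
    # (a gradient step on the least-squares misfit, with unfiltered
    # back-projection standing in for the adjoint operator).
    residual = radon(x, theta=angles) - sinogram
    x = x - step * iradon(residual, theta=angles, filter_name=None)

    # Perception step: in PFITRE a trained CNN refines the estimate
    # here, while the physics step keeps it tied to the measurements.
    # Mild Gaussian smoothing is the placeholder in this sketch.
    x = gaussian(x, sigma=1.0) * mask
```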
These findings were recently published in npj Computational Materials.
Better Vision Requires Extensive Training
Unlike consumer image correction tools, scientific imaging demands both visual clarity and data fidelity. Simply making the image look better isn’t enough. To ensure scientific accuracy, the team embedded the AI into an iterative solver (a mathematical engine that refines solutions step by step). The AI serves as a “smart” regularizer, guiding corrections without veering away from the actual X-ray data.
We didn’t want an AI that just makes better images. We wanted an AI that works hand-in-hand with physics, so that the results are both visually clear and scientifically trustworthy. That’s the power of our method – combining the sophistication of AI with the physical model to ensure fidelity.
Chonghang Zhao, Study Lead Author and Postdoc, Brookhaven National Laboratory
PFITRE’s neural network is built on a U-net architecture, a common encoder-decoder model used for image processing. The encoder extracts essential features from the input image, while the decoder reconstructs the image with improved detail. To better handle the missing wedge issue, the team enhanced the U-net with residual dense blocks and dilated convolutions, allowing it to analyze both fine and large-scale structures.
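As a rough illustration of those ingredients, the PyTorch sketch below shows a simplified residual block whose convolutions use dilation to widen the receptive field. The published network's residual dense blocks are more elaborate, and the layer sizes here are arbitrary.

```python
# An illustrative residual block with dilated convolutions; not the
# authors' published architecture.
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        # Dilated convolutions see a larger neighborhood without adding
        # parameters, helping the network capture large-scale structure
        # as well as fine detail.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual (skip) connection lets each block learn a
        # correction on top of its input rather than a full mapping.
        return torch.relu(x + self.body(x))

block = DilatedResidualBlock(channels=16)
out = block(torch.randn(1, 16, 64, 64))  # shape preserved: (1, 16, 64, 64)
```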
But training such a model requires a lot of data.
As real microscopy datasets are limited, the team created synthetic training data using natural images, simulated patterns, and scanning electron microscope (SEM) images of circuits. They also built a digital twin of the experiment to simulate real-world imperfections like noise, misalignment, and detector issues so the AI could learn to handle the kinds of challenges it would face during actual experiments.
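One way such pairs can be generated is sketched below: start from any clean image, forward-simulate limited-angle and noisy measurements, and reconstruct naively, so the artifact-ridden result becomes the network input and the clean image its target. The corruption levels and geometry here are illustrative, not values from the study.

```python
# A sketch of synthetic training-pair generation for limited-angle
# tomography, using a natural image as stand-in "ground truth".
import numpy as np
from skimage.data import camera
from skimage.transform import radon, iradon
from skimage.util import img_as_float

rng = np.random.default_rng(seed=0)

clean = img_as_float(camera())                  # stand-in ground truth
angles = np.linspace(30.0, 150.0, 120)          # missing-wedge geometry

# Forward-simulate the measurement, then add detector-like noise and a
# small random misalignment of the projections.
sinogram = radon(clean, theta=angles, circle=False)
sinogram += rng.normal(scale=0.01 * sinogram.max(), size=sinogram.shape)
shift = rng.integers(-2, 3)                     # crude jitter along detector
sinogram = np.roll(sinogram, shift, axis=0)

# The artifact-ridden reconstruction is the network input; the clean
# image is the target it learns to recover.
degraded = iradon(sinogram, theta=angles, circle=False)
training_pair = (degraded, clean)
```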
Impacting the Future of Imaging
While PFITRE is still being refined, its impact is already clear. Researchers can now extract meaningful data from samples that were previously off-limits due to size or geometry. The method broadens the field of view and reduces the influence of the missing wedge, making it possible to analyze more of a sample without compromising resolution.
It also holds promise for faster in situ experiments, requiring fewer measurements and reducing radiation exposure, which is particularly important for sensitive samples.
This opens the door to detailed imaging of samples that couldn’t be studied before. That’s a very big step forward. Whether it’s diagnosing defects in microchips or understanding why a battery degrades, PFITRE allows us to see details under conditions that were previously considered infeasible.
Hanfei Yan, Study Corresponding Author and Lead Beamline Scientist, HXN Beamline, Brookhaven National Laboratory
The researchers note there’s still work to be done.
Right now, PFITRE processes 3D structures one 2D slice at a time. Expanding to a fully 3D model would boost consistency but also increase computational demands. There’s also a need to expand the training dataset to include more types of artifacts, like faulty pixels or sample drift, and to develop methods that allow the model to learn effectively with less data.
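For illustration, slice-wise processing amounts to a loop like the one below, where `enhance_slice` is a hypothetical stand-in for the trained 2D model. Because each slice is handled independently, structures that run across slices can come out slightly inconsistent, which is what a fully 3D model would avoid at higher computational cost.

```python
# A sketch of the slice-by-slice strategy: a 2D model is applied
# independently along one axis of the volume.
import numpy as np

def enhance_slice(slice_2d: np.ndarray) -> np.ndarray:
    # Hypothetical placeholder for the per-slice 2D network.
    return slice_2d

volume = np.random.rand(64, 128, 128)  # toy 3D dataset
restored = np.stack([enhance_slice(volume[z]) for z in range(volume.shape[0])])
```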
This new AI-assisted 3D imaging technique could accelerate discoveries in fields ranging from semiconductor development to material synthesis and even biomedical research. As machine learning continues to evolve alongside advanced imaging platforms like NSLS-II, tools like PFITRE will help scientists explore the microscopic world in greater detail than ever before.

This 3D image of an integrated circuit shows slices through its thickness. The figure compares results from three datasets: one created using the full set of angles, one reconstructed with the new perception fused iterative tomography reconstruction engine (PFITRE) method, and one using today's "gold standard" method, the fast iterative shrinkage-thresholding algorithm (FISTA). Video Credit: Brookhaven National Laboratory.
Journal Reference:
Zhao, C., et al. (2025) Limited-angle x-ray nano-tomography with machine-learning enabled iterative reconstruction engine. npj Computational Materials. DOI: 10.1038/s41524-025-01724-0. https://www.nature.com/articles/s41524-025-01724-0.