XSPLAIN: XAI-enabling Splat-based Prototype
Learning for Attribute-aware INterpretability

Dominik Galus1, Julia Farganus1, Tymoteusz Zapała1, Mikołaj Czachorowski1,
Piotr Borycki2, Przemysław Spurek2,3, Piotr Syga1
1 Wrocław University of Science and Technology 2 Jagiellonian University 3 IDEAS Research Institute
XSPLAIN Teaser

Figure 1. XSPLAIN provides ante-hoc, prototype-based explanations for 3D Gaussian Splat classification.

Abstract

3D Gaussian Splatting (3DGS) has rapidly become a standard for high-fidelity 3D reconstruction, yet its adoption in critical domains is hindered by the lack of interpretability. While explainability methods exist for point clouds, they typically rely on ambiguous saliency maps that fail to capture the volumetric coherence of Gaussian primitives.

We introduce XSPLAIN, the first ante-hoc, prototype-based interpretability framework designed specifically for 3DGS classification. Our approach leverages a voxel-aggregated PointNet backbone and a novel, invertible orthogonal transformation that disentangles feature channels for interpretability while strictly preserving decision boundaries. Explanations are grounded in representative training examples, enabling intuitive "this looks like that" reasoning without degradation in classification performance. A rigorous user study (N=51) demonstrates a decisive preference for our approach (p < 0.001) over existing post-hoc methods.

Qualitative Results

XSPLAIN provides explanations by identifying coherent volumetric regions that drive the classification decision. Unlike traditional methods that rely on abstract saliency maps, our framework grounds its reasoning in geometry.

The animation below demonstrates the "this looks like that" reasoning process. For a given input object, XSPLAIN isolates specific disentangled attributes (e.g., engines, wings, or wheels) and retrieves the training prototypes that are most similar with respect to these geometric features.

XSPLAIN Method Animation

Dynamic visualization of XSPLAIN. The model highlights specific parts of the query object (left) and matches them with semantically corresponding regions in training prototypes (right), validating the attribute-aware interpretability.
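The retrieval step described above can be sketched as a similarity search restricted to the feature channels of one disentangled attribute. The snippet below is a minimal illustration, not the paper's implementation; the function name, the channel-index interface, and the use of cosine similarity are all assumptions.

```python
import numpy as np

def retrieve_prototypes(query_feat, proto_feats, channels, k=3):
    """Rank training prototypes by cosine similarity computed only over
    the feature channels assigned to one disentangled attribute."""
    q = query_feat[channels]          # (c,) attribute slice of the query
    P = proto_feats[:, channels]      # (n, c) same slice for all prototypes
    sims = (P @ q) / (np.linalg.norm(P, axis=1) * np.linalg.norm(q) + 1e-8)
    return np.argsort(-sims)[:k]      # indices of the k best-matching prototypes
```

Under this sketch, a call like `retrieve_prototypes(f, train_feats, channels=wing_channels)` (with hypothetical names) would return the training shapes whose wing-related channels most resemble the query's.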

Methodology

XSPLAIN operates in two stages to decouple classification performance from interpretability:

Architecture Diagram

Figure 2. Overview of the XSPLAIN architecture: A) Classification Backbone, B) Disentangling Module, C) Prototype-based Explanation.
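The key property of the disentangling module is that an invertible orthogonal transformation cannot change the classifier's decisions: rotating the features and counter-rotating the linear classifier leaves the logits identical, since Q.T @ Q = I. The toy sketch below demonstrates this invariance with a random orthogonal matrix standing in for the learned transform; the dimensions and the purely linear classifier are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 256-dim backbone feature, 10 classes.
d, n_classes = 256, 10
f = rng.standard_normal(d)               # feature vector for one object
W = rng.standard_normal((n_classes, d))  # linear classifier weights

# A random orthogonal matrix stands in for the learned transform
# (QR decomposition of a Gaussian matrix yields orthogonal Q).
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

# Rotate the features into the disentangled basis and counter-rotate
# the classifier; the logits are unchanged because Q.T @ Q = I.
f_rot = Q @ f
W_rot = W @ Q.T
print(np.allclose(W_rot @ f_rot, W @ f))  # True
```

This is why interpretability can be added "for free" here: the rotation reshuffles which channel encodes which attribute, while every decision boundary is strictly preserved.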

User Study (N=51)

In a blinded A/B/C test against LIME and PointSHAP, participants significantly preferred XSPLAIN explanations.

Metric                      LIME   PointSHAP   XSPLAIN (Ours)
Preference (Best Method)     18%         33%              49%
High Confidence in Model     23%         31%              46%

Citation

@misc{galus2026xsplain,
  title={XSPLAIN: XAI-enabling Splat-based Prototype Learning for Attribute-aware INterpretability},
  author={Dominik Galus and Julia Farganus and Tymoteusz Zapała and Mikołaj Czachorowski and Piotr Borycki and Przemysław Spurek and Piotr Syga},
  year={2026},
  eprint={2602.10239},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}