Advancing precision oncology through tumor digital twins: A versatile ViT-determined margin-consistent model for lung adenocarcinoma histopathologic subtyping in hematoxylin-eosin images

Presenter: Bardia Rodd, PhD · Session: Digital Pathology 3 · Time: April 21, 2026, 9:00 AM–12:00 PM

Authors

Meghdad Sabouri Rad 1, Mohammad Mehdi Hosseini 1, Muhammad Hassaan Khalid 2, Saverio J. Carello 1, Michel R. Nasr 1, Rossana Kazemimood 3, Ola El-Zammar 1, Bardia Rodd 1

1 SUNY Upstate Medical University, Syracuse, NY; 2 University of Texas Health Science Center at Houston, Houston, TX; 3 Pathology, University of Texas Health Science Center at Houston, Houston, TX

Abstract

Background: Digital twin frameworks in oncology require a stable, patient-specific histologic representation of tumor architecture. However, current LUAD subtyping models are vulnerable to stain and scanner variability, domain shift, and poor probability calibration. We investigate whether a state-space/Transformer hybrid with attention and margin-aware training can generate a robust, calibrated "morphology twin" suitable for integration into future tumor digital-twin systems.

Methods: A uniform patch-aggregation WSI pipeline was implemented with identical tiling, Macenko normalization, augmentations, and optimization across all backbones. From 143 FFPE H&E WSIs in BMIRDS-LUAD, we extracted 203,226 tissue patches (224×224 at 20×). Patches were encoded using ResNet50/101, ViT-L, or a state-space/Transformer hybrid (MambaVision). Slide-level predictions were produced via gated-attention MIL and a linear classifier whose logit gaps define decision margins. Training employed cross-entropy combined with a supervised representation term, without handcrafted harmonization or test-time adaptation. Internal development used a WSI-stratified split across five LUAD growth patterns. Zero-shot external evaluation used WSSS4LUAD. Endpoints included accuracy, subtype-specific AUC, feature-margin concordance, internal-to-external performance drop, calibration (ECE/Brier), and run-to-run variability across 10 seeds.

Results: On BMIRDS-LUAD, MambaVision+attention achieved 96.40±3.32% accuracy with ROC-AUC ≥0.99 across all subtypes and strong feature-margin alignment (τ=0.88 training / 0.64 validation). Errors were largely confined to mixed-pattern or low-quality slides. Zero-shot transfer to WSSS4LUAD yielded 83.69±7.76% accuracy, the smallest performance drop (−12.71 points) among all backbones. Calibration improved both internally and externally (ECE/Brier: 0.043/0.098 internal; 0.087/0.154 external), with statistically significant gains over the ResNet and ViT baselines. Variability across seeds narrowed (3.32% vs 6.12% for ResNet50), indicating enhanced training stability.

Conclusions: This framework addresses key LUAD subtyping failure modes: cross-site instability, overconfident boundary errors, and limited reproducibility. By integrating state-space modeling, Transformer attention, and margin-consistent learning, it produces a calibrated morphology twin that preserves subtype discriminability under domain shift and provides more trustworthy WSI-derived probabilities. While not a complete cancer digital twin, it forms a robust histologic module for integration into next-generation multimodal and temporal LUAD digital-twin systems.
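
The Methods describe slide-level prediction via gated-attention MIL over patch embeddings, with decision margins defined by classifier logit gaps. Below is a minimal sketch of that pooling step, assuming a PyTorch implementation in the style of gated attention (Ilse et al.); the feature dimension, hidden size, and class count are illustrative assumptions rather than the study's settings.

```python
import torch
import torch.nn as nn


class GatedAttentionMIL(nn.Module):
    """Gated-attention pooling over patch embeddings plus a linear slide classifier (sketch)."""

    def __init__(self, feat_dim=1024, hidden_dim=256, n_classes=5):
        super().__init__()
        self.attn_V = nn.Linear(feat_dim, hidden_dim)      # tanh branch
        self.attn_U = nn.Linear(feat_dim, hidden_dim)      # sigmoid gating branch
        self.attn_w = nn.Linear(hidden_dim, 1)             # scalar attention score per patch
        self.classifier = nn.Linear(feat_dim, n_classes)   # slide-level linear classifier

    def forward(self, patch_feats):                         # patch_feats: (n_patches, feat_dim)
        gate = torch.tanh(self.attn_V(patch_feats)) * torch.sigmoid(self.attn_U(patch_feats))
        attn = torch.softmax(self.attn_w(gate), dim=0)      # (n_patches, 1), sums to 1 over patches
        slide_feat = (attn * patch_feats).sum(dim=0)        # attention-weighted slide embedding
        logits = self.classifier(slide_feat)                 # (n_classes,)
        return logits, attn


# Decision margin = gap between the top two slide-level logits.
model = GatedAttentionMIL()
patch_feats = torch.randn(500, 1024)                         # dummy patch embeddings for one slide
logits, attn = model(patch_feats)
top2 = torch.topk(logits, k=2).values
print(f"predicted subtype: {logits.argmax().item()}, logit-gap margin: {(top2[0] - top2[1]).item():.3f}")
```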
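
Training is described as cross-entropy combined with a supervised representation term; the exact form of that term is not specified in the abstract. The sketch below pairs standard cross-entropy with a hypothetical centroid-pull penalty on slide embeddings and an assumed weight lam, purely to illustrate how such a combined objective can be wired.

```python
import torch
import torch.nn.functional as F


def representation_term(slide_feats, labels, n_classes=5):
    """Pull each slide embedding toward its in-batch class centroid (illustrative stand-in only)."""
    loss = slide_feats.new_zeros(())
    for c in range(n_classes):
        mask = labels == c
        if mask.sum() > 1:                                   # need at least two slides of this class
            centroid = slide_feats[mask].mean(dim=0).detach()
            loss = loss + ((slide_feats[mask] - centroid) ** 2).sum(dim=1).mean()
    return loss / n_classes


def total_loss(logits, slide_feats, labels, lam=0.1):
    """Cross-entropy on slide logits plus a weighted supervised representation term."""
    return F.cross_entropy(logits, labels) + lam * representation_term(slide_feats, labels)


logits = torch.randn(8, 5)                                    # dummy slide logits (batch of 8)
slide_feats = torch.randn(8, 1024)                            # dummy slide embeddings
labels = torch.randint(0, 5, (8,))                            # dummy growth-pattern labels
print(total_loss(logits, slide_feats, labels).item())
```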
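
Calibration is reported via ECE and Brier score. A minimal NumPy sketch of both endpoints follows; the 15-bin equal-width binning is an assumption, since the abstract does not state the binning scheme.

```python
import numpy as np


def expected_calibration_error(probs, labels, n_bins=15):
    """probs: (N, C) predicted probabilities; labels: (N,) integer class labels."""
    conf = probs.max(axis=1)                        # confidence of the predicted class
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():                             # weight |accuracy - confidence| by bin occupancy
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece


def brier_score(probs, labels):
    """Mean over slides of the squared distance between probabilities and the one-hot label."""
    onehot = np.eye(probs.shape[1])[labels]
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))


rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=200)          # dummy 5-class probability vectors
labels = rng.integers(0, 5, size=200)
print(expected_calibration_error(probs, labels), brier_score(probs, labels))
```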
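
Feature-margin concordance is reported as τ, here assumed to be Kendall's rank correlation between a representation-space margin and the classifier's logit-gap margin. The abstract does not define the feature-space margin, so the centroid-distance proxy below is an illustrative assumption, not the authors' measure.

```python
import numpy as np
from scipy.stats import kendalltau


def feature_margin(slide_feats, labels):
    """Distance to the nearest other-class centroid minus distance to the own-class centroid."""
    classes = np.unique(labels)
    centroids = np.stack([slide_feats[labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(slide_feats[:, None, :] - centroids[None, :, :], axis=2)
    rows = np.arange(len(labels))
    own_idx = np.searchsorted(classes, labels)
    own = dists[rows, own_idx]
    masked = dists.copy()
    masked[rows, own_idx] = np.inf                   # exclude the own-class centroid
    return masked.min(axis=1) - own                  # larger = better separated in feature space


rng = np.random.default_rng(0)
slide_feats = rng.normal(size=(100, 64))             # dummy slide embeddings
labels = rng.integers(0, 5, size=100)                # dummy subtype labels (5 growth patterns)
logit_gaps = rng.normal(size=100)                    # stand-in for per-slide top-1 minus top-2 logits

tau, p = kendalltau(feature_margin(slide_feats, labels), logit_gaps)
print(f"feature-margin concordance: Kendall tau = {tau:.2f}")
```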

Disclosure

M. Khalid, None. R. Kazemimood, None. B. Rodd, None.

Control: 5259 · Presentation ID: 3125 · Meeting: 21436