Identification and detection of thin-cap fibroatheroma (TCFA) from intravascular optical coherence tomography (IVOCT) images is critical for the treatment of coronary heart disease. Recently, deep learning methods have shown promising success in TCFA identification. However, most methods do not effectively utilize multi-view information or incorporate prior domain knowledge. In this paper, we propose a multi-view contour-constrained transformer network (MVCTN) for TCFA identification in IVOCT images. Inspired by the diagnostic process of cardiologists, we use contour-constrained self-attention modules (CCSM) to emphasize features corresponding to salient regions (i.e., vessel walls) in an unsupervised manner and to enhance visual interpretability based on class activation mapping (CAM). Moreover, we exploit transformer modules (TM) to build global-range relations between two views (i.e., polar and Cartesian views) and effectively fuse features at multiple feature scales. Experimental results on a semi-public dataset and an in-house dataset demonstrate that the proposed MVCTN outperforms other single-view and multi-view methods. Lastly, the proposed MVCTN can also provide meaningful visualizations for cardiologists via CAM.