Region-to-boundary deep learning model with multi-scale feature fusion for medical image segmentation

2022 
Abstract Accurately locating and segmenting lesions, organs, and tissues in medical images is a necessary prerequisite for disease diagnosis, monitoring, and treatment planning. Semantic segmentation assigns a class to each pixel or voxel in two- or three-dimensional space, which supports clinical parameter measurement and disease diagnosis. Owing to the diversity of target properties such as size, shape, location, and intensity, segmenting lesions or organs in medical images remains a long-standing challenge; for low-contrast images in particular, boundary recognition is especially difficult. In this paper, we propose a novel region-to-boundary deep learning model to alleviate this problem. First, we use a U-shaped network with two branches after the last layer: one generates the target probability map, and the other produces the corresponding signed distance map. Second, guided by the signed distance map and the extracted multi-scale features, we focus on the boundary of the lesions or organs to be segmented. Finally, we fuse the region and boundary features to obtain the final segmentation. We conduct extensive experiments on two public datasets and compare the proposed model with seven representative methods. The results show that it outperforms the comparative methods on most evaluation metrics, especially in boundary tracking.
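The abstract only outlines the architecture, but the two-branch head it describes is simple enough to illustrate. Below is a minimal PyTorch sketch, assuming a 2D U-shaped backbone whose last decoder feature map feeds two 1x1-convolution branches, one for the target probability map and one for a tanh-bounded signed distance map, with a small fusion layer on top. All class and parameter names here are hypothetical, and the fusion details are assumptions rather than the paper's exact implementation.

import torch
import torch.nn as nn

class RegionToBoundaryHead(nn.Module):
    """Hypothetical two-branch head on the last decoder feature map:
    one branch predicts the region probability map, the other a signed
    distance map (SDM); the two cues are fused for the final output."""
    def __init__(self, in_channels: int, num_classes: int = 1):
        super().__init__()
        # Region branch: per-pixel foreground probability.
        self.region_branch = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        # Boundary branch: signed distance to the target boundary,
        # squashed to (-1, 1) with tanh, a common SDM parameterization.
        self.sdm_branch = nn.Sequential(
            nn.Conv2d(in_channels, num_classes, kernel_size=1),
            nn.Tanh(),
        )
        # Fusion: concatenate region and boundary cues, refine with a 1x1 conv.
        self.fuse = nn.Conv2d(2 * num_classes, num_classes, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        prob = torch.sigmoid(self.region_branch(feats))  # probability map
        sdm = self.sdm_branch(feats)                     # signed distance map
        out = self.fuse(torch.cat([prob, sdm], dim=1))   # fused prediction
        return out, prob, sdm

# Usage: decoder features, e.g. 64 channels at full resolution.
if __name__ == "__main__":
    head = RegionToBoundaryHead(in_channels=64)
    x = torch.randn(2, 64, 128, 128)
    out, prob, sdm = head(x)
    print(out.shape, prob.shape, sdm.shape)  # each (2, 1, 128, 128)

For supervision, a common recipe (again an assumption; the abstract does not specify the construction) derives the ground-truth signed distance map from a binary mask with SciPy's Euclidean distance transform:

import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Negative inside the target, positive outside; exact sign and
    normalization conventions vary across papers."""
    mask = mask.astype(bool)
    if not mask.any() or mask.all():
        return np.zeros(mask.shape, dtype=np.float32)
    inside = distance_transform_edt(mask)    # distance to nearest background pixel
    outside = distance_transform_edt(~mask)  # distance to nearest foreground pixel
    return (outside - inside).astype(np.float32)

With this convention, the tanh output of the boundary branch can be regressed against a normalized ground-truth SDM, while the probability branch is trained with a standard segmentation loss.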