
A coarse-to-fine cascade deep learning neural network for segmenting cerebral aneurysms in time-of-flight magnetic resonance angiography

Abstract

Background

Accurate segmentation of unruptured cerebral aneurysms (UCAs) is essential for treatment planning and rupture risk assessment. Three-dimensional time-of-flight magnetic resonance angiography (3D TOF-MRA) is currently among the most commonly used methods for screening aneurysms owing to its noninvasiveness. Methods based on deep learning can assist radiologists in achieving accurate and reliable analysis of the size and shape of aneurysms, which may be helpful in rupture risk prediction models. However, existing methods have not achieved accurate segmentation of cerebral aneurysms in 3D TOF-MRA.

Methods

This paper proposed a CCDU-Net for segmenting UCAs in 3D TOF-MRA images. The CCDU-Net was a cascade of a convolutional neural network for coarse segmentation and the proposed DU-Net for fine segmentation. In particular, the dual-channel input of DU-Net comprised the vessel image and its contour image, which augmented the vascular morphological information. Furthermore, a newly designed weighted loss function was used in the training of DU-Net to improve segmentation performance.

Results

A total of 270 patients with UCAs were enrolled in this study. The images were divided into training (N = 174), validation (N = 43), and testing (N = 53) cohorts. The CCDU-Net achieved a dice similarity coefficient (DSC) of 0.616 ± 0.167, a Hausdorff distance (HD) of 5.686 ± 7.020 mm, and a volumetric similarity (VS) of 0.752 ± 0.226 in the testing cohort. Compared with the existing best method, the DSC and VS increased by 18% and 5%, respectively, while the HD decreased to roughly one-tenth.

Conclusions

We proposed a CCDU-Net for segmenting UCAs in 3D TOF-MRA, and the obtained results show that the proposed method outperformed other existing methods.

Background

Cerebral aneurysms (CAs) are abnormal bulges that mostly occur in the circle of Willis [1]. Rupture of CAs is the leading cause of subarachnoid hemorrhage (SAH) [2], and the combined death and disability rate after a first rupture is as high as approximately 30% [3]. Accurate segmentation and reliable analysis of the size and shape of unruptured cerebral aneurysms (UCAs) may be helpful in rupture risk prediction [4]. Meanwhile, three-dimensional time-of-flight magnetic resonance angiography (3D TOF-MRA) has become one of the most commonly used screening methods in recent years because of its noninvasiveness. Hence, accurate segmentation of UCAs from 3D TOF-MRA images is particularly crucial.

However, owing to the various shapes and complex locations of UCAs, accurate segmentation of UCAs can be difficult. With the development of deep learning technologies, methods based on deep learning models (DLMs) can speed up the clinical diagnostic workflow without compromising accuracy; nevertheless, accurate segmentation of UCAs remains challenging. To our knowledge, the study of Sichtermann et al. [5] was the first to use a convolutional neural network (CNN) to segment UCAs on a 3D TOF-MRA data set, reaching a dice similarity coefficient (DSC) of 0.53. They focused on preprocessing of the data set while paying less attention to segmentation accuracy, which resulted in a low DSC. In addition, Jun Ma [6] trained nnU-Net [7] at the ADAM 2020 Challenge (https://adam.isi.uu.nl/) and achieved a DSC of 0.41, ranking first. They fed the entire image into the model while overlooking the potential disappearance of small-aneurysm features with increasing model depth. In this paper, we developed a CCDU-Net for segmenting UCAs in 3D TOF-MRA. The CCDU-Net was based on a coarse-to-fine segmentation framework: the coarse segmentation model was a CNN, and the fine segmentation model was our proposed DU-Net. The dual-channel input of DU-Net comprised the vessel image and the vascular contour image, which augmented the morphological information of vessels with UCAs. Meanwhile, a weighted loss function was designed to adaptively assign weights to voxels that were not well segmented.

Results

Data materials

In total, 270 patients from 2014 to 2021 were included in this retrospective study and annotated by three junior radiologists and a senior radiologist. The patients were randomly split into the training and validation cohort (N = 217) and the testing cohort (N = 53). The average size of UCAs was 5.468 ± 3.283 mm in the training and validation cohort (mean age, 61.4 ± 12.2 years) and 5.373 ± 3.515 mm in the testing cohort (mean age, 59.4 ± 13.9 years). As shown in Table 1, the distribution of aneurysms in all cohorts covered the internal carotid artery, middle cerebral artery, anterior cerebral artery, posterior cerebral artery, and basilar artery areas, but no vertebral artery area was included in the testing cohort. In addition, the distribution of aneurysm sizes can be seen in Fig. 1. As we adopted a fivefold cross-validation strategy, we plotted a box plot of one fold for display.

Table 1 Profiles of patients
Fig. 1

Distribution of aneurysm sizes in the training, validation, and testing cohorts

CCDU-Net segmentation performance evaluation

Using our proposed method to segment UCAs in the testing cohort, CCDU-Net achieved a DSC of 0.616 ± 0.167, an HD of 5.946 ± 6.680 mm, and a VS of 0.752 ± 0.226.

The findings showed that most images with a single UCA (36/49, segmented/total) were well segmented, and the distribution covered the internal carotid artery area (23/31), the middle cerebral artery area (5/6), the anterior cerebral artery area (7/8), the posterior cerebral artery area (1/1), and the basilar artery area (2/3). Images with double UCAs (2/4) were also well segmented, and the distribution covered the posterior cerebral artery area (1/1). Meanwhile, the double UCAs (2/4) that were not segmented were distributed in the internal carotid artery area (2/2), with maximum diameters ranging from 2.59 to 3.08 mm.

Comparison between CCDU-Net and other methods

The segmentation performances of our proposed CCDU-Net and other existing methods in the testing cohort are listed in Table 2. Our proposed method achieved higher DSC and VS and lower HD than DeepMedic and nnU-Net. In particular, the HD of CCDU-Net was roughly one-tenth of those of DeepMedic and nnU-Net; the other two methods produced large numbers of false positive and false negative areas, which negatively affected their HD values.

Table 2 Segmentation performances of the proposed method compared with other existing methods in the testing cohort

To visually display the segmentation performances of the models mentioned above, several typical TOF-MRA images were selected for comparison. The segmentation results obtained with the models are presented in Fig. 2. The results of DeepMedic (red), nnU-Net (yellow), and CCDU-Net (orange) were superimposed over the manual segmentation, namely the ground truth (GT, green). The white arrows indicate the positions of the models' segmentation results, and the distance between lines of different colors reflects the HD value. The results of the proposed method in images b3, c3, and d3, with low HD, were closer to the manual annotation, while the other methods did not precisely segment the correct regions. In series a and e, none of the three models segmented the double UCAs well; however, CCDU-Net produced fewer false positive and false negative areas than the other existing methods.

Fig. 2

Typical visualizations of the segmentation performances of different models in the testing cohort. The segmentation results of DeepMedic (red), nnU-Net (yellow), and CCDU-Net (orange) are superimposed over the GT (green). The white arrows indicate the positions of the models' segmentation results

Ablation experiments of different inputs and loss functions

In this section, we explored the best settings of the inputs and the weighted loss function. The segmentation performances of different inputs were compared through ablation experiments in the testing cohort. As presented in Table 3, the dual-channel model trained with the initial loss function achieved a higher DSC of 0.591 ± 0.201, a VS of 0.738 ± 0.225, and a lower HD of 5.686 ± 7.020 mm, while the single-channel network trained with the initial loss function achieved a DSC of 0.567 ± 0.205, an HD of 6.433 ± 8.153 mm, and a VS of 0.738 ± 0.217. When the vascular contour image was fed into the network as the second input channel, the DSC increased by 5% and the HD was reduced by 0.747 mm.

Table 3 Ablation results of different inputs and weighted loss functions in the testing cohort

Regarding the exponent of the weighted loss function with the single-channel input, Table 3 shows that when β was equal to 0.1, the model achieved its best DSC of 0.590 ± 0.194, HD of 5.085 ± 5.787 mm, and VS of 0.741 ± 0.213. Compared with the model trained with the initial loss function, the weighted dice loss function improved network performance.

Discussion

In this study, we proposed and trained a CCDU-Net for segmenting UCAs in 3D TOF-MRA and evaluated it in the testing cohort. The CCDU-Net was a cascade of a CNN and the proposed DU-Net. Two operations were included: the vascular contour was extracted and fed, along with the vessel image, as the dual-channel input of DU-Net to augment the morphological information of vessels with UCAs, and a weighted loss function was designed to train DU-Net to improve the accuracy on voxels that were difficult to segment. Comparisons with existing methods showed that our proposed method achieved higher DSC and VS and lower HD than DeepMedic (DSC 0.286 ± 0.299, HD 61.999 ± 71.326 mm, VS 0.502 ± 0.303) and nnU-Net (DSC 0.521 ± 0.287, HD 59.598 ± 83.901 mm, VS 0.717 ± 0.245). In particular, the HD of CCDU-Net was roughly one-tenth of those of DeepMedic and nnU-Net.

Considering the tiny size of UCAs in 3D TOF-MRA images, the information describing them tends to disappear with increasing network depth. Hence, we extracted the vascular contour as one input channel to augment the morphological information of vessels with UCAs. As seen from the ablation results in the testing cohort, the dual-channel network improved on the single-channel backbone network; specifically, HD decreased by 0.747 mm compared with the backbone model. HD measures the maximum distance between two point sets and is sensitive to object contours, making it important for analyzing the effect of the dual-channel input on augmenting morphological information. The improvement in HD indicates that the dual-channel input indeed helped the model learn UCA contours, i.e., the added morphological information was useful during learning.

While verifying the exponent of the weighted loss function, we chose values within the interval for experimental comparison, and when β = 0.1, the network performance improved the most. Compared with the network trained with the initial loss function, the model trained with WDL increased DSC by 6%, decreased HD by 38%, and increased VS by 5%. The voxel overlap between the ground truth and the prediction is intuitively reflected by DSC. The improvement in DSC indicates that WDL did play a role in making the network focus on aneurysm voxels that were not well segmented during training.

Meanwhile, in the coarse-to-fine segmentation framework adopted in this article, the performance of the coarse segmentation stage affected the fine segmentation. First, in the above experiments, we found that the prediction of DeepMedic could not guarantee that the VOI, after being cropped to 64 × 64 × 64, contained the entire cerebral aneurysm, which influenced segmentation accuracy. Second, if there were too many FP counts at the coarse segmentation stage, the number of cropped VOIs would also increase, which could in turn increase the FP count of the overall network. Our current research aimed to improve the accuracy of cerebral aneurysm segmentation; in future work, we will try to reduce the false positive count while maintaining high-precision segmentation.

Conclusions

In this paper, we proposed a CCDU-Net for segmenting UCAs from 3D TOF-MRA images, which included two main operations: extracting the vascular contour image along with the vessel image as the dual-channel input of DU-Net, and designing a weighted loss function for training. The performance of CCDU-Net was verified in ablation studies, and CCDU-Net achieved the highest DSC and the lowest HD compared with the existing methods.

Methods

Data collection

The data used in this paper were acquired on 1.5 T or 3 T GE Discovery MR750 and 3 T SIEMENS Verio scanners. Details of the image acquisition parameters are presented in Table 4. The inclusion criteria were as follows: (1) patients with saccular UCAs, and (2) availability of preoperative 3D TOF-MRA. The exclusion criterion was images containing serious artifacts, as judged by three junior radiologists and a senior radiologist. This study was approved by the Institutional Review Boards of our center, and the informed consent requirement was waived.

Table 4 Image acquisition parameters

Development of CCDU-Net

In this study, we adopted CCDU-Net which was a cascade of a CNN for coarse segmentation and the DU-Net for fine segmentation, and the general workflow of CCDU-Net is shown in Fig. 3a.

Fig. 3

a Workflow of the proposed CCDU-Net. b Full architecture of DU-Net

First, the preprocessed data were passed through a CNN [8] for coarse segmentation to detect UCAs. Second, the volume of interest (VOI) was generated according to the coarse segmentation result, and the VOI coordinates were then used to crop the vessel image and the vascular contour image at the same position.
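The VOI-cropping step above can be sketched as follows (a minimal NumPy illustration, assuming a 64 × 64 × 64 VOI as used later in the paper; the function name and the boundary-clamping behavior are our own):

```python
import numpy as np

def crop_voi(volume, coarse_mask, size=64):
    """Crop a cubic VOI centred on the centroid of the coarse-segmentation mask.

    The cube's start indices are clamped so the VOI stays inside the volume.
    Returns the cropped block and the slice tuple, so the same coordinates
    can be reused to crop the contour image at the identical position.
    """
    half = size // 2
    centroid = np.round(np.argwhere(coarse_mask).mean(axis=0)).astype(int)
    starts = [int(np.clip(c - half, 0, dim - size))
              for c, dim in zip(centroid, volume.shape)]
    slices = tuple(slice(s, s + size) for s in starts)
    return volume[slices], slices
```

Reusing the returned `slices` on the vessel image and on the contour image guarantees both input channels cover exactly the same region.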

Subsequently, the VOIs of the vessel image and the vascular contour image were fed into DU-Net, trained with our proposed weighted loss function. Figure 3b shows the architecture of DU-Net. The dual-channel input of DU-Net comprised the VOIs of the vessel image and its contour image. The network was based on the variant 3D U-Net proposed by Isensee et al. [9] and retained a four-layer deep structure. In the encoding path, except for the input layer, which was a 3 × 3 × 3 convolution with stride 1, each layer consisted of a 3 × 3 × 3 convolution with stride 2 followed by a context block. The context block was composed of a 3 × 3 × 3 convolution followed by a dropout layer with a dropout probability of 0.3. In addition, residual connections were embedded between the convolution block and the context block to reduce the loss of feature maps. After the penultimate context block, an SE [10] block was embedded; the SE block can assign weights to effective feature channels and suppress invalid ones as a channel attention mechanism. In the decoding path, with the exception of the output layer, which was a 3 × 3 × 3 convolution with stride 1, the other layers consisted of a localization block and an upsampling block. After the first upsampling block, we also embedded an SE block. The localization block included a 3 × 3 × 3 convolution with stride 1 and a 1 × 1 × 1 convolution, whose input was the concatenation of the outputs of the upsampling block and the corresponding context block. Meanwhile, segmentation layers were employed for deep supervision at different levels of the decoding path. The final output was obtained by summing the outputs of the segmentation layers and applying the SoftMax activation.
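The channel-recalibration idea of the SE block can be illustrated with a minimal NumPy sketch (the weight shapes and the bottleneck reduction ratio are hypothetical, since the paper does not specify them):

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation: recalibrate the channels of a 3D feature map.

    x  : feature map of shape (D, H, W, C)
    w1 : squeeze weights of shape (C, C // r) for some reduction ratio r
    w2 : excitation weights of shape (C // r, C)
    """
    squeeze = x.mean(axis=(0, 1, 2))              # global average pool -> (C,)
    hidden = np.maximum(squeeze @ w1, 0.0)        # bottleneck FC + ReLU
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # FC + sigmoid -> per-channel gate
    return x * gate                               # reweight each channel
```

With all-zero weights the gate is sigmoid(0) = 0.5 for every channel, i.e., a uniform scaling; trained weights instead learn to emphasize informative channels and suppress weak ones.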

Finally, the fine segmentation result was restored to the corresponding position and size of the raw image by resampling, to obtain the segmentation of the aneurysm.

Weighted dice loss function

Inspired by the focal loss [11], we designed a new weighted dice loss function. The purpose was to make the network focus on the voxels of UCAs that were difficult to segment. During the training process, the overlap between the label and the prediction was evaluated by DSC; when the overlap was small, the segmentation performance was poor, and the loss value was weighted accordingly. That is to say, the worse the segmentation performance, the higher the loss value, and vice versa. GT is the abbreviation of ground truth, Pred refers to the prediction, and the symbol "| |" denotes absolute value. β and S are constant terms: β was derived from experimental inference, and S was a smoothing constant set to 0.0001 in practice. The formula of our loss function is shown in (1) below:

$$ {\text{Weighted Dice Loss}} = \left( {1 - {\text{DSC}}} \right)^{\beta } \left( { - 2 \times \frac{{|{\text{GT}} \cap {\text{Pred}}| + \frac{S}{2}}}{{|{\text{GT}}| + |{\text{Pred}}| + S}}} \right) $$
(1)

To satisfy the purpose raised above, we analyzed the value interval of β. Since the parenthetical expression is essentially the DSC calculation, we set \({\text{Weighted loss}} = - {\text{DSC}}(1 - {\text{DSC}})^{\beta }\) and speculated that, over a certain interval of β, the weighted loss would increase when DSC was smaller and decrease when DSC was larger. By analyzing the monotonicity of the function, it can be concluded that when β lies in [0, 1], the aim of assigning larger weights to poorly segmented voxels is achieved. Following this analysis, we referred to the exponent value in the focal loss function [11] and employed an arithmetic progression to set candidate values of β. Models trained with the different loss functions were compared on three metrics, and Table 3 shows that the best setting was β equal to 0.1.
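A minimal NumPy reading of Eq. (1), assuming the parenthetical term is the usual smoothed soft dice over voxel sums (the function and variable names are ours, not the authors'):

```python
import numpy as np

def weighted_dice_loss(gt, pred, beta=0.1, s=1e-4):
    """Weighted dice loss: the soft dice term is scaled by (1 - DSC)**beta,
    so cases with small overlap contribute a relatively larger loss.
    beta = 0.1 and s = 0.0001 follow the settings reported in the paper."""
    intersection = np.sum(gt * pred)
    soft_dice = (2.0 * intersection + s) / (np.sum(gt) + np.sum(pred) + s)
    return -((1.0 - soft_dice) ** beta) * soft_dice
```

For a perfect prediction the weight (1 − DSC)^β vanishes and the loss tends to zero, while partial overlaps keep a negative loss whose magnitude reflects both the dice term and the weight.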

Training of CCDU-Net

Preprocessing and data augmentation

The data set used in our study came from seven centers; therefore, data preprocessing was essential to ensure feature similarity. We performed the following preprocessing operations: (I) N4BiasFieldCorrection [12], (II) cerebral artery extraction [13], (III) Z-score normalization [14], and (IV) vascular contour extraction, in which the vascular contour was extracted with the Sobel [15] operator. The processed data set was divided into the training, validation, and testing cohorts, and the following further processing was applied to the training and validation cohorts.
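The contour-extraction step (IV) can be sketched, for a single slice, as the classic 3 × 3 Sobel gradient magnitude (a plain NumPy illustration; the actual pipeline presumably applies the operator to the 3D vessel data):

```python
import numpy as np

# Classic 3x3 Sobel kernels for the two in-plane gradient directions
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_contour(slice2d):
    """Gradient-magnitude edge map of one slice via the Sobel operator."""
    h, w = slice2d.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = slice2d[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.hypot(np.sum(patch * KX), np.sum(patch * KY))
    return out
```

Flat regions (inside or outside a vessel) yield zero response, so only the vessel boundary survives in the contour channel.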

In the coarse-to-fine segmentation framework, the inputs differed between stages. For the CNN, the input was a single-channel vessel image of 128 × 128 × 128; when training the CNN, we adaptively dilated the aneurysm (label = 1) according to the UCA size. For the DU-Net, the input was a dual-channel image comprising the VOIs of the vessel image and the vascular contour image; in detail, we took the centroid of the label as the center of a cube and cropped VOIs of 64 × 64 × 64. In addition, motivated by the augmentation approaches used in brain-tumor segmentation [16], the training cohort was augmented eightfold through flipping along the z-axis, discrete Gaussian filtering [17], and histogram equalization. Specifically, we first flipped the initial data set, doubling it; Gaussian filtering was then applied to the initial and flipped data together, quadrupling it; finally, histogram equalization was applied to yield the eightfold augmentation.
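The eightfold scheme (flip, then Gaussian filtering, then histogram equalization, each step applied to everything produced so far) can be sketched as follows; this NumPy version uses a crude [1, 2, 1]/4 separable kernel as a stand-in for the discrete Gaussian and assumes the z-axis is axis 0:

```python
import numpy as np

def hist_equalize(vol, bins=256):
    """Histogram equalization: map intensities through the empirical CDF."""
    hist, edges = np.histogram(vol.ravel(), bins=bins)
    cdf = hist.cumsum() / vol.size
    centers = (edges[:-1] + edges[1:]) / 2
    return np.interp(vol.ravel(), centers, cdf).reshape(vol.shape)

def smooth3(vol):
    """Crude discrete-Gaussian stand-in: separable [1, 2, 1]/4 kernel per axis."""
    for axis in range(vol.ndim):
        vol = (np.roll(vol, 1, axis) + 2 * vol + np.roll(vol, -1, axis)) / 4
    return vol

def augment8(volumes):
    """Eightfold augmentation: z-flip, smoothing, equalization, each doubling
    the pool (1x -> 2x -> 4x -> 8x)."""
    pool = list(volumes)
    pool += [v[::-1] for v in pool]           # z-flip: 2x
    pool += [smooth3(v) for v in pool]        # Gaussian filtering: 4x
    pool += [hist_equalize(v) for v in pool]  # histogram equalization: 8x
    return pool
```

Each operation is applied to the whole pool accumulated so far, which is what makes three binary transforms multiply the cohort by eight rather than four.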

Training

The training was divided into two stages. First, when training the CNN, we performed fivefold cross-validation on a Tesla V100 (NVIDIA) GPU with 16 GB of VRAM. The primary software environment included Python 3.6, CUDA 10.0, and TensorFlow-GPU 1.14.0. The following parameters were set: the number of iterations was 700; the batch size was 10; the learning rate was 1e−3 initially and dropped gradually; the L1 regularization weight was 1e−6 and the L2 regularization weight was 1e−4; RMSProp [18] was used as the optimizer. Second, when training DU-Net, we performed fivefold cross-validation on a GeForce RTX 2080 Ti (NVIDIA) GPU with 11 GB of VRAM. The primary software environment included Python 3.6, CUDA 10.0, Keras 2.3.1, and TensorFlow-GPU 2.0.0. The following parameters were set: the number of iterations was 500; the batch size was 1; the learning rate was 5e−4 initially and was halved whenever the validation loss did not improve within 10 iterations, and training was stopped after 50 epochs without improvement in the validation loss; Adam was used as the optimizer. The training curves of the CNN and DU-Net in one of the folds are shown in Fig. 4.
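The DU-Net schedule (halve the learning rate after 10 epochs without validation-loss improvement; stop after 50 such epochs) can be replayed as a small plain-Python sketch; this is our own illustration of the stated policy, not the authors' training code:

```python
def plateau_schedule(val_losses, lr0=5e-4, patience=10, stop_patience=50):
    """Replay the reduce-on-plateau / early-stopping policy over a sequence of
    validation losses.  Returns the final learning rate and the epoch index at
    which training stopped (or the last epoch if no early stop occurred)."""
    lr, best, since_best, since_drop = lr0, float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, since_best, since_drop = loss, 0, 0
        else:
            since_best += 1
            since_drop += 1
            if since_drop >= patience:      # 10 epochs w/o improvement: halve LR
                lr, since_drop = lr / 2, 0
            if since_best >= stop_patience:  # 50 epochs w/o improvement: stop
                return lr, epoch
    return lr, len(val_losses) - 1
```

In Keras this policy corresponds to combining a reduce-on-plateau learning-rate callback with an early-stopping callback on the validation loss.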

Fig. 4

Training curves of the CNN (left) and DU-Net (right)

Statistical analysis

The following metrics were used for evaluation: DSC, Hausdorff distance (HD), and volumetric similarity (VS). GT is the abbreviation for ground truth, which was based on the manual segmentation of three junior radiologists with a final check by a senior radiologist with 21 years of experience, and Pred refers to the prediction. The formulas of these metrics are shown in (2), (3), and (4):

$$ {\text{DSC}} = 2 \times \frac{{|{\text{GT}} \cap {\text{Pred}}|}}{{|{\text{GT}}| + |{\text{Pred}}|}} $$
(2)
$$ {\text{HD}} = \max \left( {h\left( {{\text{GT}},{\text{ Pred}}} \right),h\left( {{\text{Pred}},{\text{ GT}}} \right)} \right) $$
(3)
$$ {\text{VS}} = 1 - \frac{{\left| {{\text{GT}} - {\text{Pred}}} \right|}}{{\left| {{\text{GT}}} \right| + \left| {{\text{Pred}}} \right|}} $$
(4)
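The three metrics can be sketched in NumPy for binary masks as follows (helper names are ours; the HD here is the exact symmetric voxel-set distance, whereas evaluation toolkits often report a percentile variant):

```python
import numpy as np

def dsc(gt, pred):
    """Dice similarity coefficient between two binary masks, Eq. (2)."""
    inter = np.logical_and(gt, pred).sum()
    return 2.0 * inter / (gt.sum() + pred.sum())

def hausdorff(gt, pred, spacing=1.0):
    """Symmetric Hausdorff distance between the voxel sets of two masks, Eq. (3).
    h(A, B) is the largest distance from a point of A to its nearest point of B."""
    a = np.argwhere(gt) * spacing
    b = np.argwhere(pred) * spacing
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def vs(gt, pred):
    """Volumetric similarity: 1 - |V_gt - V_pred| / (V_gt + V_pred), Eq. (4)."""
    return 1.0 - abs(gt.sum() - pred.sum()) / (gt.sum() + pred.sum())
```

Multiplying voxel indices by the physical `spacing` is what turns the HD into millimetres, as reported in the results.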

Availability of data and materials

Not applicable.

Abbreviations

TOF-MRA: Time-of-flight magnetic resonance angiography
DSA: Digital subtraction angiography
CNN: Convolutional neural network
UCA: Unruptured cerebral aneurysm
VRAM: Video random access memory
GPU: Graphics processing unit
CUDA: Compute unified device architecture
MCA: Middle cerebral artery
PCA: Posterior cerebral artery
ICA: Internal carotid artery
ACA: Anterior cerebral artery
BA: Basilar artery
VA: Vertebral artery
FP: False positive

References

  1. Kayembe KNT, Sasahara M, Hazama F. Cerebral aneurysms and variations in the circle of willis. Stroke. 1984;15(5):846–50.


  2. Suarez JI, Tarr RW, Selman WR. Aneurysmal subarachnoid hemorrhage. N Engl J Med. 2006;354(4):387–96.


  3. Joseph JJ, Donner TW. Long-term insulin glargine therapy in type 2 diabetes mellitus: a focus on cardiovascular outcomes. Vasc Health Risk Manag. 2015;11:107–16.


  4. Ji W, Liu A, Lv X, Kang H, Sun L, Li Y, Yang X, Jiang C, Wu Z. Risk score for neurological complications after endovascular treatment of unruptured intracranial aneurysms. Stroke. 2016;47(4):971–8.


  5. Sichtermann T, Faron A, Sijben R, Teichert N, Freiherr J, Wiesmann M. Deep learning-based detection of intracranial aneurysms in 3D TOF-MRA. AJNR Am J Neuroradiol. 2019. https://0-doi-org.brum.beds.ac.uk/10.3174/ajnr.A5911.


  6. Timmins KM, van der Schaaf IC, Bennink E, Ruigrok YM, An X, Baumgartner M, Bourdon P, De Feo R, Noto TD, Dubost F, et al. Comparing methods of detecting and segmenting unruptured intracranial aneurysms on TOF-MRAS: the ADAM challenge. Neuroimage. 2021. https://0-doi-org.brum.beds.ac.uk/10.1016/j.neuroimage.2021.118216.


  7. Isensee F, Jaeger P, Kohl S, Petersen J, Maier-Hein K. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods. 2021;18:1–9.


  8. Kamnitsas K, Ledig C, Newcombe VFJ, Simpson JP, Kane AD, Menon DK, Rueckert D, Glocker B. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med Image Anal. 2017;36:61–78.


  9. Isensee F, Kickingereder P, Wick W, Bendszus M, Maier-Hein KH. Brain tumor segmentation and radiomics survival prediction: contribution to the BRATS 2017 challenge. In: Brainlesion: glioma, multiple sclerosis, stroke and traumatic brain injuries. 2018. p. 287–97.

  10. Hu J, Shen L, Sun G: Squeeze-and-excitation networks. In 2018 IEEE/CVF conference on computer vision and pattern recognition. 2018, 42: 2011–2023.

  11. Lin TY, Goyal P, Girshick R, He K, Dollar P. Focal loss for dense object detection. IEEE Trans Pattern Anal Mach Intell. 2020;42(2):318–27.


  12. Tustison N, Avants B, Cook P, Zheng Y, Egan A, Yushkevich P, Gee J. N4ITK: improved N3 bias correction. IEEE Trans Med Imaging. 2010;29(6):1310–20.


  13. Chen G, Wei X, Lei H, Liqin Y, Yuxin L, Yakang D, Daoying G. Automated computer-assisted detection system for cerebral aneurysms in time-of-flight magnetic resonance angiography using fully convolutional network. Biomed Eng Online. 2020;19(1):38.


  14. Goyal H, Sandeep D, Venu R, Pokuri R, Kathula S, Battula N. Normalization of data in data mining. Int J Software Web Sci. 2014;10:32–3.


  15. Sobel I, Feldman G. An isotropic 3×3 image gradient operator. 1968.

  16. Nalepa J, Marcinkiewicz M, Kawulok M. Data augmentation for brain-tumor segmentation: a review. Front Comput Neurosci. 2019;13:83.


  17. Lindeberg T. Discrete scale-space theory and the scale-space primal sketch. Norstedts Tryckeri: Stockholm; 1991.


  18. Schaul T, Antonoglou I, Silver D. Unit tests for stochastic optimization. Arxiv. 2014. https://0-doi-org.brum.beds.ac.uk/10.48550/arXiv.1312.6055.



Funding

This research was funded by National Natural Science Foundation of China (Grant Number 81971685), National Key Technology Research Development Program (Grant Number 2018YFA0703101); Science and Technology Commission of Shanghai Municipality (Grant Number 19411951200); Suzhou Health Science & Technology Project (Grant Number GWZX201904); Youth Innovation Promotion Association CAS (Grant Number 2021324) and Quancheng 5150 Project.

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization, MC and CG; form analysis, MC; methodology, MC and CG; visualization, MC; writing—original draft, MC; investigation, CG and JJZ; validation, CG and DDW; writing—review and editing, CG and FML; data curation, DDW, RYD and SRP; supervision, YXL and YKD; software, JJZ and ZYZ; project administration, YXL and YKD. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Yuxin Li or Yakang Dai.

Ethics declarations

Ethics approval and consent to participate

The ethics board of Huashan Hospital comprehensively reviewed and approved the protocol of this study.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Chen, M., Geng, C., Wang, D. et al. A coarse-to-fine cascade deep learning neural network for segmenting cerebral aneurysms in time-of-flight magnetic resonance angiography. BioMed Eng OnLine 21, 71 (2022). https://0-doi-org.brum.beds.ac.uk/10.1186/s12938-022-01041-3



  • DOI: https://0-doi-org.brum.beds.ac.uk/10.1186/s12938-022-01041-3

Keywords