Aryavir07/From-Detection-To-The-Segmentation-Of-Brain-Tumors

Brain Tumor Detection and Segmentation Using Deep Learning

License: MIT

Repository overview

  • Using ResUNet and transfer learning for brain tumor detection. Automated detection would lower the cost of cancer diagnostics and aid in the early detection of malignancies, which can be lifesaving.
    To classify MRI images containing brain malignancies, this notebook provides implementations of deep learning models such as ResNet50, VGG16 (via transfer learning), and plain CNN architectures. After training for 100 epochs, the two VGG16 variants gave very similar results, while ResNet50 performed best (see Performance below).
    This notebook uses a dataset from Kaggle containing 3,930 brain MRI scans in .tif format, along with the tumor location for each scan and patient information.

Working

  • The project is based on image segmentation, whose purpose is to understand and extract information from images at the pixel level.
  • Image segmentation may be used for object detection and localisation, which has a wide range of applications including medical imaging and self-driving automobiles.
  • The first part of this project implements deep learning models (ResNet50, two fine-tuned variants of the VGG16 model, and a basic CNN) to classify MRI scans containing brain tumors.
  • In the second part, a ResUNet model is implemented to localize brain tumors in the classified MRI scans.
  • Using image segmentation, a neural network is trained to generate pixel-wise masks of the images.
  • Modern image segmentation techniques are based on deep learning approaches that use common architectures such as CNNs, FCNs (Fully Convolutional Networks), and deep encoder-decoders.
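
What "pixel-wise mask" means in practice can be sketched in a few lines: a segmentation model outputs one probability per pixel, and thresholding those probabilities yields the binary mask. The 4x4 probability map and the 0.5 threshold below are illustrative assumptions, not values from the notebook.

```python
import numpy as np

# Made-up per-pixel tumor probabilities for a tiny 4x4 "scan".
probs = np.array([
    [0.1, 0.2, 0.1, 0.0],
    [0.1, 0.9, 0.8, 0.1],
    [0.2, 0.7, 0.95, 0.1],
    [0.0, 0.1, 0.2, 0.1],
])

# Thresholding gives a binary pixel-wise mask: 1 = tumor, 0 = background.
mask = (probs > 0.5).astype(np.uint8)
print(mask.sum())  # number of pixels classified as tumor -> 4
```

The mask has the same shape as the input, which is exactly what "understanding the image at the pixel level" buys over a single whole-image label.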

ResUNet

Source and Explanation

The ResUNet architecture combines the UNet backbone with residual blocks to overcome the vanishing gradient problem found in deep architectures. ResUNet consists of three parts:

  • Encoder: the contraction path consists of several contraction blocks; each block passes its input through res-blocks followed by 2x2 max pooling. The number of feature maps doubles after each block, which helps the model learn complex features effectively.
  • Decoder: each decoder block takes the up-sampled input from the previous layer and concatenates it with the corresponding output features from the res-block in the contraction path. The result is then passed through a res-block followed by 2x2 up-sampling convolution layers. This ensures that features learned while contracting are reused while reconstructing the image.
  • Bottleneck: the bottleneck block serves as the connection between the contraction path and the expansion path. It passes its input through a res-block followed by 2x2 up-sampling convolution layers.
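
The doubling/halving of feature maps along the contraction and expansion paths can be sketched as simple shape bookkeeping. The starting shape (a 256x256 input with 16 initial filters) and the block count (3) are illustrative assumptions, not the notebook's exact configuration.

```python
def encode(shape, blocks=3):
    """Each contraction block 2x2-max-pools (halving H, W) and doubles the feature maps."""
    h, w, c = shape
    path = [shape]
    for _ in range(blocks):
        h, w, c = h // 2, w // 2, c * 2
        path.append((h, w, c))
    return path

def decode(shape, blocks=3):
    """Each expansion block 2x2-upsamples (doubling H, W) and halves the feature maps."""
    h, w, c = shape
    path = [shape]
    for _ in range(blocks):
        h, w, c = h * 2, w * 2, c // 2
        path.append((h, w, c))
    return path

down = encode((256, 256, 16))
up = decode(down[-1])
print(down)    # [(256, 256, 16), (128, 128, 32), (64, 64, 64), (32, 32, 128)]
print(up[-1])  # the decoder restores the encoder's input shape: (256, 256, 16)
```

The symmetry is what makes the skip connections possible: each decoder stage has an encoder stage with exactly matching spatial dimensions to concatenate with.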

Masks

  • The output produced by an image segmentation model is called the MASK of the image.
  • A mask associates a value with every pixel coordinate; for example, a 2x2 all-black image is represented as [[0,0],[0,0]], and this mask is flattened to [0,0,0,0].
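
The flattening described above is plain row-major reshaping; a minimal numpy sketch using the 2x2 example from the text (the second mask, with one tumor pixel, is an added illustration):

```python
import numpy as np

# The 2x2 all-black mask from the text, flattened row by row.
mask = np.array([[0, 0], [0, 0]])
flat = mask.flatten()
print(flat.tolist())  # [0, 0, 0, 0]

# A mask with one tumor pixel flattens the same way, preserving row order.
mask2 = np.array([[0, 1], [0, 0]])
print(mask2.flatten().tolist())  # [0, 1, 0, 0]
```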

Workflow

Performance

| Model Name    | Accuracy | Balanced Accuracy | Recall | F1    | Weighted F1 | Average Precision |
|---------------|----------|-------------------|--------|-------|-------------|-------------------|
| CNN           | 88.61    | 87.13             | 88.616 | 88.60 | 88.61       | 88.61             |
| VGG16 Model 1 | 82.14    | 82.68             | 82.14  | 83.39 | 82.13       | 83.14             |
| VGG16 Model 2 | 83.03    | 81.59             | 83.03  | 83.18 | 83.03       | 83.03             |
| ResNet50      | 98.88    | 98.71             | 98.88  | 98.88 | 98.88       | 98.88             |
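
For readers unfamiliar with the table's metrics, they can be computed directly from labels and predictions. The tiny label/prediction arrays below are made up for illustration and are not the notebook's actual outputs.

```python
import numpy as np

# Toy binary ground truth (1 = tumor) and model predictions.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 1])

# Accuracy: fraction of all samples predicted correctly.
accuracy = (y_true == y_pred).mean()                # 0.8

# Per-class recall: fraction of each class recovered.
recall_pos = (y_pred[y_true == 1] == 1).mean()      # 0.75
recall_neg = (y_pred[y_true == 0] == 0).mean()      # ~0.833

# Balanced accuracy: mean of per-class recalls, robust to class imbalance.
balanced_accuracy = (recall_pos + recall_neg) / 2   # ~0.792
```

Balanced accuracy matters here because tumor/no-tumor datasets are often imbalanced, so plain accuracy alone can look deceptively high.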

Final Results

Deployment UML

Citations and Original Authors:

**Ryan Ahmed** (https://www.coursera.org/instructor/~48777395)

@article{diakogiannis2020resunet,
  title={ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data},
  author={Diakogiannis, Foivos I and Waldner, Fran{\c{c}}ois and Caccetta, Peter and Wu, Chen},
  journal={ISPRS Journal of Photogrammetry and Remote Sensing},
  volume={162},
  pages={94--114},
  year={2020},
  publisher={Elsevier}
}

@article{simonyan2014very,
  title={Very deep convolutional networks for large-scale image recognition},
  author={Simonyan, Karen and Zisserman, Andrew},
  journal={arXiv preprint arXiv:1409.1556},
  year={2014}
}
