Fusion Framework for Brain Tumor Classification Using a VGG16-EfficientNetB0 Fusion Model

DOI : 10.17577/IJERTV13IS030038


Tejashwini P S

Computer Science and Engineering UVCE, Bangalore University

Geetha N

Computer Science and Engineering UVCE, Bangalore University

Dr. Thriveni J

Computer Science and Engineering UVCE, Bangalore University

Abstract—The rising mortality rate due to brain tumors, which are abnormal clusters of rapidly multiplying cells in or near the brain, presents a growing threat. Early detection significantly improves survival chances, making automated assistive tools imperative for prompt diagnosis. Magnetic resonance (MR) images play a pivotal role in detecting brain tumors, and various deep learning algorithms, such as EfficientNetB0 and VGG16, are employed for this task. To leverage the strengths of these algorithms, a fusion model is utilized, combining their capabilities. This fusion model was tested on a challenging dataset comprising 7,023 MRI brain tumor images, yielding remarkable results: it achieved 100% accuracy during training and 99% during testing, showcasing its effectiveness in enhancing CNN performance for image classification on the Kaggle Br35H dataset.

Keywords—Fusion, Transfer learning, EfficientNetB0, VGG16

  1. INTRODUCTION

    Human brains are safeguarded within the sturdy encasing of the skull. Even a small anomaly inside the brain, such as a tumor, can precipitate severe consequences. Tumors can develop when certain areas of the brain experience insufficient oxygen flow, potentially resulting in death or significant brain impairment. According to medical assessments, nearly 700,000 individuals worldwide are afflicted by brain tumors, making their detection and treatment critical. Both magnetic resonance imaging (MRI) and computed tomography (CT) scans are commonly utilized to identify irregularities in brain tissues, including tumors. However, MRI scans are typically preferred by medical practitioners due to their superior imaging capabilities. Automated procedures utilizing medical image processing techniques are increasingly recognized as invaluable aids in identifying brain tumors. However, the diverse shapes, sizes, and locations of tumors make accurate detection and classification challenging. Medical experts meticulously examine MRI images to pinpoint regions where tumors may be present. Yet, the unclear boundaries between tumors and adjacent healthy tissue necessitate extensive manual analysis, potentially leading to inaccuracies in diagnosis. Brain tumor detection can be significantly enhanced through the fusion of deep learning techniques via transfer learning approaches; however, this requires the expertise of seasoned professionals to identify optimal feature extraction and segmentation algorithms [1].

    Deep learning empowers the analysis of large datasets, facilitating rapid pattern recognition and model development. This technology bridges the realms of technology and medicine, offering new avenues for identifying crucial disease characteristics. Research endeavors seek to enhance treatment efficacy, reduce healthcare costs, and delay brain degeneration through early tumor detection. Preprocessing steps, including feature extraction and selection, are imperative before applying deep learning methods. Recent advancements have generated vast amounts of multimodal imaging data, advancing early tumor detection and classification efforts. However, decision-making remains complex and time-consuming, necessitating consideration of various options. The primary aim of this work is to develop a diagnostic system for early brain tumor detection using MRI images and novel deep learning techniques. By training models on datasets such as the Br35H Kaggle MRI dataset, their efficiency in tumor identification is evaluated using quantitative metrics such as recall, accuracy, precision, and F1-score [2]. The rapid evolution of machine learning, particularly deep learning, has revolutionized medical imaging by enhancing the representational capacity of convolutional neural networks (CNNs) [3][4]. This paper advocates an automated assistive method for brain tumor detection based on the fusion of deep learning features obtained from various techniques. Feature fusion improves model performance by combining lower- and higher-level features into cohesive vectors. The proposed methodology is evaluated using quantitative metrics on the Br35H dataset, showcasing its effectiveness in multi-classification tasks. The system's primary contributions include automated feature extraction, deep feature fusion, and tumor type classification. Despite challenges such as imperfect backgrounds and MRI artifacts, the proposed methodology enables successful tumor categorization. The subsequent sections of this paper cover relevant literature, materials and techniques used, the model description, potential future work, and concluding remarks, with the model's performance showcased through graphs and a confusion matrix.

  2. RELATED WORKS

    This section provides an overview of the various approaches researchers have explored over the past few decades to tackle the issue, along with subsequent advancements in the field. The increasing popularity of deep learning stems from its promising applications in diagnosing diseases associated with tumors. Recent releases of different deep learning algorithms as tumor detection aids have assisted physicians in making informed decisions regarding treatment options. Several CNN models, such as GoogLeNet [7], VGG [6], and AlexNet [5], are currently employed in medical image classification applications. The key research directly relevant to this inquiry is summarized as follows. Anichur Rahman et al. [8] present two deep learning models for binary and multiclass brain tumor diagnosis, utilizing datasets comprising 3,064 and 152 MRI images. They utilize a 23-layer CNN and adopt VGG16 for transfer learning to address overfitting in the smaller dataset. These models surpass previously published state-of-the-art models, achieving classification accuracies of 97.8% and 100%. In another study, Muhammad Rizwan et al. [9] describe a CAD method employing deep learning, specifically a GCNN with a Gaussian filter, for accurate and efficient brain tumor identification. Achieving an accuracy of 99.31%, the system amalgamates predictions from five fine-tuned pre-trained models (GoogLeNet, AlexNet, ShuffleNet, SqueezeNet, and NASNet-Mobile) through a hybrid approach employing majority voting [10]. Employing image preprocessing, extensive data augmentation, and feature extraction from various CNNs (AlexNet, GoogLeNet, and ResNet18), another system enhances tumor classification using SVM and KNN, attaining 99.7% accuracy on a substantial dataset [11]. Utilizing a concatenate layer to blend the outputs of the Xception and NASNetMobile architectures [12], a dropout layer to address overfitting in the CNN, and transfer learning to amalgamate the two architectures, that model achieves exceptional performance.

    The preprocessing also includes optimization for the best windowing of images. Toğaçar et al. [13] introduced BrainMRNet, which utilizes the hypercolumn technique and attention modules. Before reaching the attention modules, images undergo initial preprocessing. These modules identify significant areas and route the image to convolutional layers. Within BrainMRNet, the hypercolumn approach retains features from each layer through an array structure in the final layer, achieving a system accuracy of 96.05%. The efficacy of this approach is validated through tests conducted on three brain MRI datasets. For small datasets with two classes, DenseNet-169 [14] is emphasized, while an ensemble of DenseNet-169, Inception V3, and ResNeXt-50 is recommended for larger datasets with two classes; for extensive datasets comprising four classes, the combination of ShuffleNetV2, MnasNet, and DenseNet-169 is identified. Findings consistently demonstrate that a Support Vector Machine (SVM) with an RBF kernel outperforms other machine learning classifiers in MRI-based brain tumor classification. Maqsood et al. [15] introduced a method for brain tumor detection utilizing edge detection and the U-NET model. They incorporate fuzzy logic for edge identification alongside a tumor segmentation framework that enhances image contrast. Within the U-NET architecture, features are extracted from subband images, focusing on detecting meningiomas in brain imaging. Khawaldeh et al. [19] presented a CNN model for brain tumor and glioma detection, enhancing a pre-trained architecture and achieving an overall accuracy of 91%. Despite significant efforts, further research is warranted to establish a dependable and effective method for categorizing brain MR images. One notable limitation of this body of research [16–19] is its focus solely on binary categorization of brain cancers, overlooking multiclass classification and indicating a need for further investigation to identify specific tumor subtypes. Employing ensemble classifiers for classification, Noreen et al. [20] utilized VGG16, VGG19, and AlexNet for deep feature extraction, achieving a maximum system accuracy of 94.3%. Swati et al. [21] achieved a 94.8% accuracy rate in categorizing MRI images of brain tumors using refined versions of AlexNet and VGG. Saxena et al. [22] employed ResNet, Inception-V3, and VGG-16, with ResNet achieving the highest accuracy of 95%. However, these methods exhibited subpar performance overall, warranting extensive testing prior to real-time deployment. Afshar et al. [37] achieved a 90.89% accuracy rate in classifying and identifying brain tumors using capsule networks. Notably, CapsNets are particularly sensitive to image backgrounds and perform better when trained with segmented images, which adds complexity to the architecture.

  3. MATERIALS AND METHODOLOGY

    This section presents a robust approach for tumor classification in brain MRI images, leveraging deep learning through convolutional neural networks (CNNs) and transfer learning. It details the architecture of the model and its training process. Initially, two networks are explored independently, EfficientNetB0 and VGG16, each trained individually to understand its strengths and performance characteristics in tumor classification. A fusion model is then introduced that amalgamates the strengths of both networks; it is trained and evaluated to assess its efficacy in improving tumor classification accuracy.

    1. Convolutional Neural Network

      The exceptional performance of Convolutional Neural Networks (CNNs) has sparked a surge of interest among academics, motivating them to tackle previously challenging problems. Over the past few years, numerous CNN designs have emerged, addressing a diverse array of issues across various fields, with particular emphasis on medical image recognition. CNNs typically consist of two main components:

      1. A feature extraction module, comprising multiple stacked layers that employ convolutional layers to learn intricate features from input images, and pooling layers to reduce spatial dimensions while preserving essential information.

      2. A classification module, which integrates fully connected (FC) layers to interpret the learned features and make predictions, enabling accurate image classification. This modular architecture enables CNNs to effectively capture intricate patterns within images and make informed classifications, thereby revolutionizing the landscape of image recognition tasks, especially in domains like medical imaging.
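      A minimal sketch of this two-module structure in Keras appears below; the layer counts and sizes are illustrative assumptions, not the configuration used in this paper.

```python
# A minimal two-module CNN sketch in Keras; layer sizes are illustrative,
# not the exact configuration used in this work.
from tensorflow.keras import layers, models

def build_simple_cnn(input_shape=(224, 224, 3), num_classes=4):
    return models.Sequential([
        # Feature extraction module: stacked convolution + pooling layers
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(2),
        # Classification module: FC layers interpret the learned features
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
```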

      Fig 1: CNN architecture of EfficientNetB0

    2. Transfer Learning

      Transfer learning (TL) stands out as a pivotal downstream application of learned image classification models. TL has garnered substantial attention within artificial intelligence due to its effectiveness in addressing challenges such as shifting learning objectives or scarcity of training data, and it has seen remarkable advancements over the past decade [33]. Instead of commencing from scratch with extensive datasets, TL harnesses knowledge acquired from source tasks across various domains to facilitate target tasks [34]. Utilizing a pre-trained architecture offers additional advantages: it facilitates learning by leveraging pre-trained weights, obviating the need to train extensive models from scratch, a process that typically consumes weeks on large datasets. Moreover, employing pre-trained architectures reduces the computational resources required for model training, making the process more efficient and accessible [35].
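      The pattern described above can be sketched as follows, assuming a Keras/TensorFlow setup; freezing the backbone and the size of the classification head are illustrative choices, not this paper's exact training recipe.

```python
# Transfer-learning sketch: reuse ImageNet weights as a frozen feature
# extractor and train only a small task-specific head.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB0

base = EfficientNetB0(weights="imagenet", include_top=False,
                      input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained weights fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(4, activation="softmax"),  # four tumor classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```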

    3. EfficientNetB0 Network

      The EfficientNetB0 network utilizes an approach called compound scaling, which uniformly scales network width, depth, and image resolution to achieve an optimal balance between computational efficiency and accuracy. This makes EfficientNetB0 particularly suitable for applications on mobile devices with limited computational resources. As the foundational model of the larger EfficientNet family, EfficientNetB0 strikes the right balance by employing a mobile inverted bottleneck convolutional (MBConv) block design and incorporating scaling factors for depth (d), width (w), and resolution (r) to systematically adjust the network architecture, as shown in Fig 1.
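      For reference, the compound scaling rule from the original EfficientNet formulation fixes a single compound coefficient $\phi$ (set by the available compute budget) and scales the three dimensions jointly, with the constants $\alpha$, $\beta$, $\gamma$ found by grid search:

```latex
d = \alpha^{\phi}, \qquad w = \beta^{\phi}, \qquad r = \gamma^{\phi},
\qquad \text{s.t. } \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2,
\quad \alpha \ge 1,\ \beta \ge 1,\ \gamma \ge 1
```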

      This ensures adaptability to various tasks and makes EfficientNetB0 a versatile solution for a wide range of computer vision applications. Its flexibility is further enhanced by fine-tuning capabilities, robust generalization across diverse datasets, and the availability of pre-trained models. These features enable it to maintain competitive accuracy in image classification tasks while remaining efficient.

    4. VGG16 Network

    The VGG16 architecture is a convolutional neural network (CNN) introduced by the Visual Geometry Group (VGG) at the University of Oxford. It is characterized by its simplicity and effectiveness, consisting of 16 weight layers: 13 convolutional layers and 3 fully connected layers, as shown in Fig 2.

    1. Input Layer: Accepts input images of fixed size, typically 224×224 pixels with three color channels (RGB).

    2. Convolutional Layers (Conv): The network begins with a series of convolutional layers, each followed by a Rectified Linear Unit (ReLU) activation function. These convolutional layers have small 3×3 receptive fields and are designed to learn low-level features such as edges and textures.

    3. Max Pooling Layers (MaxPool): After each block of two or three convolutional layers, a max-pooling layer is inserted to reduce spatial dimensions while retaining important features. Max pooling is performed over a 2×2 window with a stride of 2.

    4. Fully Connected Layers (FC): The convolutional layers are followed by three fully connected layers. The first two FC layers have 4096 neurons each, and the third FC layer (output layer) has 1000 neurons corresponding to the 1000 classes in the ImageNet dataset (the dataset VGG16 was originally trained on).

    5. Softmax Activation: The final layer applies a softmax activation function to produce class probabilities for classification.

    The architecture is characterized by its deep stack of layers and relatively small convolutional kernels, which allows it to capture intricate patterns and features within images effectively. Despite its simplicity compared to more recent architectures, VGG16 has demonstrated strong performance on various computer vision tasks and serves as a benchmark for CNN architectures.
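    Since the pre-trained VGG16 is available directly in Keras, its layer structure can be inspected with a couple of lines; this is a convenience sketch, not a step of the proposed pipeline.

```python
# Load the pre-trained VGG16 described above and list its
# 13 convolutional + 3 fully connected layers.
from tensorflow.keras.applications import VGG16

vgg = VGG16(weights="imagenet", include_top=True)  # expects 224x224x3 inputs
vgg.summary()  # prints the conv blocks, max-pooling layers, and FC layers
```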

    Fig 2: VGG16 architecture

  4. PROPOSED MODEL

    This study proposes a methodology for classifying and identifying brain tumors from MRI scans by leveraging deep CNN feature fusion. Fig 3 illustrates the workflow diagram corresponding to this approach. Before being fed into the CNN models for feature extraction, the MRI images undergo preprocessing steps, including splitting into training and testing sets, normalization, and enhancement. The extracted deep feature components are then merged into a unified feature representation, which the fusion model classifies through its softmax activation layer. This approach demonstrates reliability and success, accurately categorizing various forms of brain tumors, such as meningioma, glioma, and pituitary tumors.

    1. Datasets

      MRI images serve as the primary input data for determining the presence of the tumor under consideration, prioritized due to their superior quality and suitability for medical diagnostics. The dataset used for assigning the appropriate classification is compiled from MRI images. Both the Classification-1 and Classification-2 models are trained using an open-access dataset available on Kaggle. Specifically, the Br35H dataset from Kaggle is utilized for Classification-2. This dataset comprises four class labels: glioma, meningioma, pituitary, and normal. For Classification-2, a total of 7,023 images are included, with 5,712 used for training and the remaining 1,311 for testing.

      Fig 3: Workflow of the VGG16-ENetB0 Fusion Model

      This dataset enables Classification-2 to accurately categorize different types of brain tumors. Table 1 illustrates the distribution of classes in the Classification-2 dataset.

      Table 1: Dataset Distribution

      Class Label | Tumor Class | Images
      0 | Glioma | 1321
      1 | Meningioma | 1339
      2 | No Tumor | 1595
      3 | Pituitary | 1457

    2. Dataset Splitting

      In machine learning, especially in convolutional neural network (CNN) model development, dataset splitting is a critical step. It involves dividing the data into three distinct subsets: the training set, validation set, and test set. Each subset serves a specific purpose in ensuring the model's effectiveness and generalization capability while guarding against overfitting. The training set forms the foundation of the model, enabling it to learn intricate patterns, essential features, and underlying relationships within the data. Through iterative optimization techniques, such as gradient descent, the model adjusts its parameters to minimize the disparity between its predictions and the observed outcomes in the training data. The validation set acts as a checkpoint during the training process: it assesses the model's performance on unseen data, helping to monitor its ability to generalize while keeping hyperparameters in check. By periodically evaluating the model's performance on the validation set, overfitting, a phenomenon where the model becomes too specialized to the training data, can be mitigated. The test set remains untouched until the final evaluation stage. It provides an unbiased measure of the model's ability to generalize to previously unseen data, offering insight into its real-world applicability and reliability. Additionally, random shuffling of the dataset before splitting reduces bias and ensures that each subset is representative of the overall dataset. Commonly used split ratios allocate 70% for training, 10% for validation, and 20% for testing, but adjustments can be made based on specific requirements and dataset size. Overall, proper dataset splitting is crucial for developing robust and reliable machine learning models; a minimal split along these lines is sketched below.
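      The sketch below realizes the 70/10/20 split with shuffling using scikit-learn; the array names and placeholder data are assumptions, and stratification is an added safeguard not specified in the text.

```python
# Hedged sketch of a shuffled 70/10/20 train/validation/test split.
import numpy as np
from sklearn.model_selection import train_test_split

images = np.random.rand(1000, 224, 224, 3)   # placeholder for real MRI data
labels = np.random.randint(0, 4, size=1000)  # placeholder class labels (0-3)

# First carve out the 20% test set, shuffling to reduce bias.
X_rest, X_test, y_rest, y_test = train_test_split(
    images, labels, test_size=0.20, shuffle=True, stratify=labels,
    random_state=42)

# Then take 10% of the full data (12.5% of the remainder) for validation.
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.125, shuffle=True, stratify=y_rest,
    random_state=42)
```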

    3. Data Preprocessing

      In this phase, essential preprocessing tasks are performed on each subset individually. This may include various data-specific adjustments, such as scaling images or normalization. Additionally, data augmentation techniques can be applied to image datasets to expand the training set's size and improve model generalization. These techniques may involve operations such as rotation, flipping, and zooming.
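      One possible realization of these augmentation operations with Keras; the specific parameter values are assumptions for illustration.

```python
# Illustrative augmentation pipeline for rotation, flipping, and zooming.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_augmenter = ImageDataGenerator(
    rescale=1.0 / 255,     # scale pixel values to [0, 1]
    rotation_range=15,     # random rotations of up to 15 degrees
    horizontal_flip=True,  # random left-right flips
    zoom_range=0.1,        # random zoom in/out of up to 10%
)
# Usage: train_augmenter.flow(X_train, y_train, batch_size=32)
```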

      Fig 4: VGG16-ENetB0 Fusion Model

    4. Deep Feature Extraction

      Transfer learning finds its primary applications in cross-domain fields, notably in areas like medical image diagnosis. This approach alleviates the need for large datasets and significantly reduces the lengthy training periods typically associated with building custom deep learning models. Our analysis utilized the EfficientNetB0 and VGG16 CNN architectures. These CNNs excel as deep feature extractors, adept at capturing significant characteristics autonomously, without manual intervention.
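      As a short sketch of a pre-trained CNN acting as a stand-alone deep feature extractor (the input batch here is a random placeholder for real MR images):

```python
# A pre-trained backbone used purely as a deep feature extractor.
import numpy as np
from tensorflow.keras.applications import EfficientNetB0

extractor = EfficientNetB0(weights="imagenet", include_top=False, pooling="avg")

batch = np.random.rand(8, 224, 224, 3)  # placeholder for preprocessed MRIs
features = extractor.predict(batch)     # shape (8, 1280): one vector per image
```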

    5. Fusion of Deep CNN Features

      The quality of an input feature vector significantly impacts the performance of a machine learning classifier, particularly in accurately classifying tumors from Magnetic Resonance Images (MRIs). To achieve this, an algorithm capable of creating and detecting characteristics from MRIs is essential. In this specific phase, deep features from transfer-learned Convolutional Neural Networks (CNNs) are combined.

      Feature fusion is a crucial method used to merge multiple features from different models into a single feature vector, mitigating reliance on any single model's potentially inferior feature elements. The fusion model structure, depicted in Fig 4, integrates features from EfficientNetB0 and VGG16, providing more comprehensive information about MR images than a single vector could offer, thus improving classification results. Employing CNN architectures with diverse designs and depths introduces heterogeneity, overcoming potential drawbacks such as repeated feature spaces from homogeneous architectures. This ensures the extraction of varied higher-level and lower-level characteristics from MR images. The feature fusion process organizes each independent feature vector into four feature spaces, aligning with the number of classes in the dataset, facilitating efficient tumor classification.
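      A minimal sketch of this fusion structure, assuming global average pooling before concatenation and a dropout layer for regularization; the paper does not spell out these internals, so the details are illustrative rather than definitive.

```python
# Sketch of the VGG16 + EfficientNetB0 feature-fusion model of Fig 4.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, EfficientNetB0

inputs = layers.Input(shape=(224, 224, 3))

vgg = VGG16(weights="imagenet", include_top=False)
enet = EfficientNetB0(weights="imagenet", include_top=False)
vgg.trainable = False   # reuse pre-trained weights as fixed extractors
enet.trainable = False

# One deep feature vector per backbone
f_vgg = layers.GlobalAveragePooling2D()(vgg(inputs))    # 512-dim
f_enet = layers.GlobalAveragePooling2D()(enet(inputs))  # 1280-dim

# Fuse both vectors into a single, richer representation
fused = layers.Concatenate()([f_vgg, f_enet])
fused = layers.Dropout(0.3)(fused)  # illustrative regularization

outputs = layers.Dense(4, activation="softmax")(fused)  # four classes
fusion_model = models.Model(inputs, outputs)
```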

    6. Classification

    At this stage, the dense layer within the constructed fusion architecture processes the fused feature vector. Using the softmax activation function, class probabilities are computed and the label with the highest probability (pituitary, glioma, meningioma, or normal) is assigned. This process proves invaluable across various tasks, including image categorization and brain tumor classification, as it helps the system capture intricate relationships and patterns within the data.
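    Concretely, the softmax layer converts the dense layer's raw scores $z_1, \dots, z_K$ (here $K = 4$) into class probabilities, and the label with the largest probability is taken as the prediction:

```latex
P(y = i \mid \mathbf{z}) = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \qquad i = 1, \dots, K
```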

  5. EXPERIMENTAL RESULTS AND DISCUSSIONS

    To evaluate the effectiveness of each prediction model, a separate test set comprising 20% of the images from each class was employed. Metrics including accuracy, loss, precision, recall, and F1-score were used to assess the performance of each prediction model, and accuracy and loss graphs were examined to visualize performance trends.

    The proposed model's performance was further evaluated using the confusion matrix, F1-score, overall accuracy, specificity, sensitivity, and precision. The F1-score is particularly emphasized: as the harmonic mean of precision and recall, it is preferred over a simple mean because it accounts for extreme cases. Equations (1) through (4) outline the mathematical formulations of these metrics.
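    In the standard form, with TP, TN, FP, and FN denoting true positives, true negatives, false positives, and false negatives respectively:

```latex
\text{Accuracy}  = \frac{TP + TN}{TP + TN + FP + FN} \tag{1}
\text{Precision} = \frac{TP}{TP + FP} \tag{2}
\text{Recall}    = \frac{TP}{TP + FN} \tag{3}
\text{F1-score}  = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{4}
```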

    Fig 5: Accuracy Graph of Fusion model

    Figs 5 and 6 depict the accuracy and loss graphs of the model, respectively. The graphs clearly illustrate that as the number of epochs increases, the accuracy improves and the loss decreases; the training curve is indicated by the blue line and the validation curve by the orange line. This comprehensive evaluation provides insight into the model's performance across various dimensions, ensuring a thorough assessment of its effectiveness in classification tasks.

    Fig 6: Loss Graph of Fusion model

    Fig 7: Confusion Matrix of Fusion model

    Table 2: Classification Report of the VGG16-ENetB0 Fusion Model (CA: Class Accuracy, P: Precision, R: Recall, F: F1-score, S: Support)

    Label | CA | P | R | F | S
    0 (Glioma) | 0.99 | 1.00 | 0.95 | 0.98 | 265
    1 (Meningioma) | 0.98 | 0.97 | 0.96 | 0.96 | 269
    2 (No Tumor) | 1.00 | 0.99 | 1.00 | 0.99 | 320
    3 (Pituitary) | 0.99 | 0.94 | 1.00 | 0.97 | 293

    Table 2 presents the classification report, detailing the performance of the fusion model for each class label defined in Table 1. The report includes the class accuracy, precision, recall, F1-score, and support for each distinct class label. Table 3 summarizes the overall training, validation, and test accuracies of the fusion model.

    Table 3: Accuracy Report of the VGG16-ENetB0 Fusion Model

    Fusion Model | Train | Val | Test
    VGG16-ENetB0 | 1.00 | 0.97 | 0.99

  6. CONCLUSIONS

The rising demand for efficient and unbiased assessment of extensive medical datasets has prompted a surge in MRI-based medical image processing for brain tumor analysis. Early detection of brain tumors is vital for reducing mortality rates and ensuring effective treatment. To address the labor-intensive and subjective nature of manual diagnosis, a transfer-learning-based deep learning (DL) model was developed, combining various deep learning approaches for brain cancer classification from MRI images. The proposed fusion model, integrating EfficientNetB0 and VGG16, achieved exceptional accuracy rates of 100% during training and 99% during testing. Future research directions include enhancing the study's robustness by broadening the scope of input images, exploring the incorporation of 2D and 3D data for brain tumor categorization, and potentially enabling tumor grading with larger datasets, possibly through collaborations with hospitals. The establishment of a benchmark dataset by relevant medical authorities would further facilitate method assessment.


REFERENCES

[1] P. Mohan, S. Veerappampalayam Easwaramoorthy, N. Subramani, M. Subramanian, and S. Meckanzi, "Handcrafted deep-feature-based brain tumor detection and classification using MRI images," Electronics, vol. 11, 2022.

[2] D. Bhattacharya and M. K. Nigam, "Energy efficient fault detection and classification using hyperparameter-tuned machine learning classifiers with sensors," Measurement: Sensors, vol. 30, 100908, December 2023.

[3] J.-S. Kang, J. Kang, J.-J. Kim, K.-W. Jeon, H.-J. Chung, and B.-H. Park, "Neural architecture search survey: a computer vision perspective," Sensors, vol. 23, no. 3, 1713, pp. 1–17, 2023.

[4] H. Lee and J. Song, "Introduction to convolutional neural network using Keras; an understanding from a statistician," Communications for Statistical Applications and Methods, vol. 26, no. 6, pp. 591–610, 2019.

[5] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 25, pp. 1097–1105, 2012.

[6] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.

[7] C. Szegedy and W. Liu, "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9, Boston, MA, USA, 2015.

[8] Md. S. I. Khan, A. Rahman, T. Debnath, Md. R. Karim, M. K. Nasir, S. S. Band, A. Mosavi, and I. Dehzangi, "Accurate brain tumor detection using deep convolutional neural network," Computational and Structural Biotechnology Journal, vol. 20, pp. 4733–4745, 2022.

[9] M. Rizwan, A. Shabbir, M. Shabbir, and T. Baker, "Brain tumor and glioma grade classification using Gaussian convolutional neural network," IEEE Access, vol. 10, 2022, DOI: 10.1109/ACCESS.2022.3153108.

[10] S. E. Nassar, I. Yasser, H. M. Amer, and M. A. Mohamed, "A robust MRI-based brain tumor classification via a hybrid deep learning technique," The Journal of Supercomputing, July 2023.

[11] H. Kibriya, R. Amin, A. H. Alshehri, M. Masood, S. S. Alshamrani, and A. Alshehri, "A novel and effective brain tumor classification model using deep feature fusion and famous machine learning classifiers," Computational Intelligence and Neuroscience, vol. 2022, Article ID 7897669, 2022.

[12] H. K. Dishar and L. A. Muhammed, "Detection brain tumor disease using a combination of Xception and NASNetMobile," International Journal of Advances in Soft Computing and its Applications, vol. 15, no. 2, July 2023.

[13] M. Toğaçar, B. Ergen, and Z. Cömert, "BrainMRNet: brain tumor detection using magnetic resonance images with a novel convolutional neural network model," Medical Hypotheses, vol. 134, 109531, 2019.

[14] J. Kang, Z. Ullah, and J. Gwak, "MRI-based brain tumor classification using ensemble of deep features and machine learning classifiers," Sensors, vol. 21, no. 6, p. 2222, March 2021.

[15] S. Maqsood, R. Damasevicius, and F. M. Shah, "An efficient approach for the detection of brain tumor using fuzzy logic and U-NET CNN classification," in Computational Science and Its Applications – ICCSA 2021, pp. 105–118, Springer, New York, NY, USA, 2021.

[16] H. Mzoughi, I. Njeh, A. Wali et al., "Deep multi-scale 3D convolutional neural network (CNN) for MRI gliomas brain tumor classification," Journal of Digital Imaging, vol. 33, no. 4, pp. 903–915, 2020.

[17] S. Maqsood, R. Damasevicius, and F. M. Shah, "An efficient approach for the detection of brain tumor using fuzzy logic and U-NET CNN classification," in Computational Science and Its Applications – ICCSA 2021, pp. 105–118, Springer, New York, NY, USA, 2021.

[18] M. Toğaçar, B. Ergen, and Z. Cömert, "BrainMRNet: brain tumor detection using magnetic resonance images with a novel convolutional neural network model," Medical Hypotheses, vol. 134, Article ID 109531, 2020.

[19] S. Khawaldeh, U. Pervaiz, A. Rafiq, and R. S. Alkhawaldeh, "Noninvasive grading of glioma tumor using magnetic resonance imaging with convolutional neural networks," Applied Sciences, vol. 8, no. 1, p. 27, 2018.

[20] N. Noreen, S. Palaniappan, A. Qayyum, I. Ahmad, and M. O. Alassafi, "Brain tumor classification based on fine-tuned models and the ensemble method," Computers, Materials & Continua, vol. 67, no. 3, pp. 3967–3982, 2021.

[21] Z. N. K. Swati, Q. Zhao, M. Kabir et al., "Brain tumor classification for MR images using transfer learning and fine-tuning," Computerized Medical Imaging and Graphics, vol. 75, pp. 34–46, 2019.

[22] P. Saxena, A. Maheshwari, and S. Maheshwari, "Predictive modeling of brain tumor: a deep learning approach," in Innovations in Computational Intelligence and Computer Vision, pp. 275–285, Springer, New York, NY, USA, 2021.

[23] N. Ghassemi, A. Shoeibi, and M. Rouhani, "Deep neural network with generative adversarial networks pre-training for brain tumor classification based on MR images," Biomedical Signal Processing and Control, vol. 57, Article ID 101678, 2020.

[24] H. Kibriya, R. Rafique, W. Ahmad, and S. Adnan, "Tomato leaf disease detection using convolution neural network," in Proceedings of the IEEE International Bhurban Conference on Applied Sciences and Technologies (IBCAST), pp. 346–351, Islamabad, Pakistan, August 2021.

[25] H. H. Sultan, N. M. Salem, and W. Al-Atabany, "Multi-classification of brain tumor images using deep neural network," IEEE Access, vol. 7, Article ID 69215, 2019.

[26] M. A. Khan, I. Ashraf, M. Alhaisoni et al., "Multimodal brain tumor classification using deep learning and robust feature selection: a machine learning application for radiologists," Diagnostics, vol. 10, no. 8, p. 565, 2020.

[27] Z. A. Sejuti and M. S. Islam, "An efficient method to classify brain tumor using CNN and SVM," in Proceedings of the IEEE 2nd International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), pp. 644–648, Dhaka, Bangladesh, January 2021.

[28] J. Cheng, W. Huang, S. Cao et al., "Correction: enhanced performance of brain tumor classification via tumor region augmentation and partition," PLoS One, vol. 10, no. 12, Article ID e0144479, 2015.

[29] P. Thejaswini, M. B. Bhat, and M. K. Prakash, "Detection and classification of tumour in brain MRI," International Journal of Engineering and Manufacturing (IJEM), vol. 9, no. 1, pp. 11–20, 2019.

[30] K. Kaplan, Y. Kaya, M. Kuncan, and H. M. Ertunç, "Brain tumor classification using modified local binary patterns (LBP) feature extraction methods," Medical Hypotheses, vol. 139, 2020.

[31] H. Habib, R. Amin, B. Ahmed, and A. Hannan, "Hybrid algorithms for brain tumor segmentation, classification and feature extraction," Journal of Ambient Intelligence and Humanized Computing, 2021.

[32] M. N. Ullah, Y. Park, G. B. Kim et al., "Simultaneous acquisition of ultrasound and gamma signals with a single-channel readout," Sensors, vol. 21, no. 4, p. 1048, 2021.

[33] S. Niu, Y. Liu, J. Wang, and H. Song, "A decade survey of transfer learning," IEEE Transactions on Artificial Intelligence, vol. 1, no. 2, pp. 151–166, 2020.

[34] F. M. Carlucci, L. Porzi, B. Caputo, E. Ricci, and S. Rota Bulo, "AutoDIAL: automatic domain alignment layers," in Proceedings of the IEEE International Conference on Computer Vision, pp. 5067–5075, 2017.

[35] S. T. Krishna and H. K. Kalluri, "Deep learning and transfer learning approaches for image classification," International Journal of Recent Technology and Engineering (IJRTE), vol. 7, no. 5S4, pp. 427–432, 2019.

[36] T. He, Z. Zhang, H. Zhang, Z. Zhang, J. Xie, and M. Li, "Bag of tricks for image classification with convolutional neural networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 558–567, 2019.

[37] P. Afshar, K. N. Plataniotis, and A. Mohammadi, "Capsule networks for brain tumor classification based on MRI images and coarse tumor boundaries," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1368–1372, Brighton, UK, May 2019.