- Open Access
- Authors: Kotla Siddardha, Nemali Vishnu Vardhan Reddy, Kotapothula Ajay Kumar, Kotapothula Arun Kumar, G Mahitha
- Paper ID: IJERTV12IS040236
- Volume & Issue: Volume 12, Issue 04 (April 2023)
- Published (First Online): 30-04-2023
- ISSN (Online): 2278-0181
- Publisher Name: IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Multi Class Breast Cancer Classification using Federated Learning
Kotla Siddardha, Department of Computer Science and Engineering, PES University, Bengaluru, India
Kotapothula Ajay Kumar, Department of Computer Science and Engineering, PES University, Bengaluru, India
Kotapothula Arun Kumar, Department of Computer Science and Engineering, PES University, Bengaluru, India
Nemali Vishnu Vardhan Reddy, Department of Computer Science and Engineering, PES University, Bengaluru, India
G Mahitha, Department of Computer Science and Engineering, PES University, Bengaluru, India
Abstract — In developed nations, breast cancer is the most prevalent form of cancer in women, and around 12 percent of cases occur in the 20 to 34 age group. With the development of modern engineering techniques, medical diagnosis has improved notably in recent years. Breast cancer survival has increased significantly, and its impact on quality of life has become increasingly important; classifying breast tumors as cancerous or non-cancerous is therefore essential for early detection of breast cancer. Keeping patient data confidential is an equally high priority, since use of this medical data by unauthorized persons could lead to chaos. The quality of breast tumor classification today depends heavily on the representation of the data: producing the required characteristics from histopathology images or mammograms by hand is a time-consuming process, whereas deep learning techniques extract the required features without the need to design feature extractors. In this paper we use a special type of convolutional neural network (CNN) called DenseNet for the purpose of classification. The model not only classifies the data as benign or malignant but also identifies the 8 subclasses present within them. The dataset used for this purpose is the BreakHis dataset, which contains 7,909 histopathology images of breast cancer: 2,480 benign and 5,429 malignant to date. A concept of federated learning is also introduced in our project along with the multi-class classification to maintain a decentralized learning approach and avoid data breaches. For this we use local models and a global model to achieve decentralized learning.
Keywords: DenseNet, Federated Learning, Convolutional Neural Network, Deep Learning, Histopathology Images, Breast Cancer
INTRODUCTION
Breast cancer is the most frequent cancer among women worldwide, except for skin cancers. It accounts for roughly 30% (or 1 in 3) of new female cancer cases each year. According to the American Cancer Society's projections for 2022, there will be 51,400 new instances of ductal carcinoma in situ (DCIS), 287,850 new cases of invasive breast cancer in women, and 43,250 breast cancer deaths.
Worldwide, more than 2.3 million women received a breast cancer diagnosis in 2020, and 685,000 of them died. Globally, a case of breast cancer is diagnosed every 14 seconds. The incidence of breast cancer has risen by more than 20 percent worldwide since 2008, and death rates have increased by 14 percent.
There are different types of breast cancer, classified by where they occur within the breast; a few of them are ductal carcinoma in situ, invasive ductal carcinoma, inflammatory breast cancer, and metastatic breast cancer. Ductal carcinoma in situ (DCIS) is a non-invasive cancer in which abnormal cells are found in the lining of the breast milk duct; these cells have not left the ducts or invaded the adjacent tissue. In invasive ductal carcinoma (IDC), the abnormal cells that develop in the lining of the milk ducts spread to adjacent tissues, making it an invasive cancer. If the abnormal cells are present in the lobules, the cancer is called lobular cancer. Breast cancer that has invaded the skin and lymphatic vessels of the breast is referred to as inflammatory breast cancer; it frequently produces no distinct tumor or lump in the breast that can be felt. Stage four breast cancer is called metastatic breast cancer; at this stage the cancer spreads to other parts of the body, usually the lungs, bones, or liver.
As we have seen, early identification and classification of breast cancer can reduce the risk of abnormal cells spreading. Most authors who have worked on the classification of breast tumors determined only whether a tumor is benign or malignant, but that by itself is not sufficient for current requirements. In this paper we therefore focus on a method in which breast tumors are classified into 8 classes. We use a special type of convolutional neural network, DenseNet 121, for this classification. The dataset used for this model is the BreakHis dataset.
In the medical sector a patient's private information should be kept confidential, because patients regularly provide private or sensitive information to healthcare professionals such as doctors, nurses, and hospitals. Maintaining privacy and confidentiality should be a major priority for healthcare professionals: if a patient believes that their information is not safeguarded and there is a danger it may be revealed, they may decide not to provide it in the first place, which would be a huge loss to people who conduct research using medical data. Therefore, in this paper we use a decentralized approach, federated learning, in which the medical data cannot be accessed by the parties using the model.
The remainder of this paper is organized as follows: Section II covers related work, Section III the methodology, Section IV the results and discussion, and Section V the conclusion.
RELATED WORK
Ahmed Iqbal Pritom et al. [1] explored three knowledge-mining methods, Support Vector Machine (SVM), C4.5 decision tree, and Naive Bayes, to predict future recurrence of carcinoma. The Weka suite of machine learning techniques was used as the learning tool and for data processing. An effective feature selection algorithm was used to eliminate lower-ranked features and increase each model's accuracy; those attributes were not only insignificant contributors, their inclusion also led the classification algorithms astray. To maintain balance and classifier functionality, 10-fold cross-validation was used for all three algorithms: the initial sample was randomly divided into k equal-sized samples, one sample was retained as the validation set, and the remaining k-1 samples were used as training data. 66% of the database was used for training, while the remaining 34% was available for testing. To increase prediction precision, the Ranker algorithm was used for optimal feature selection and removal of unnecessary and irrelevant characteristics. Naive Bayes and the C4.5 decision tree were tested using InfoGainAttributeEval, which evaluates attributes by calculating the information gain with respect to the class, while SVMAttributeEval was chosen for the SVM, which evaluates an attribute's value using the SVM separator; in this case attributes were ranked using the square of the weights assigned by the SVM. For multi-class problems, attribute selection was handled by class-level attributes acting independently in a one-vs-all approach, with the highest ranking of each used to give the final status.
Umme Salma M et al. [2] proposed a method to efficiently identify features from high-dimensional breast cancer datasets by combining a clustering algorithm with a random probability distribution. The fast K-means algorithm, an improved version of K-means that is considerably faster and more accurate than the generic algorithm, was used to choose reliable and compatible features, and Particle Swarm Optimization (PSO) was used on top of it. The results were analysed over multiple performance tests and verified using different classification tactics, and were extremely encouraging: on the KDD Cup 2008 dataset the approach provided a precision of 99.39 percent, and its time complexity was found to be O(log k), using the feature subset obtained by the K-means-based PSO. The attributes were ranked, with the highest-accuracy attribute stated first, and the top ten attributes were chosen for efficiency.
Daffa Fajri Riesaputri et al. [3] focus on the classification of breast cancer using mammogram images and probabilistic neural networks (PNNs). To maximise PNN performance, noise was reduced using median filters, segmentation was done with a Gaussian Mixture Model (GMM), and features were extracted with the Grey Level Co-occurrence Matrix (GLCM). The PNN algorithm was selected because it performs well with limited data. The prediction accuracy in this research has the greatest value compared with the other approaches considered; this is achievable because the PNN method is effective at solving a classification problem when a median filter is used to eliminate salt-and-pepper noise in the images and when GMM segmentation and GLCM features are combined to streamline the network's structure and improve diagnostic accuracy. The PNN classifies the tissue into three classes: 1. normal, 2. benign breast cancer, 3. malignant breast cancer. The suggested approach can map each pattern optimally, does not need a lot of data, and fixes issues with back-propagation.
Qingfang He et al. [4] proposed a parallel heterogeneous 8-class classification model for breast cancer using the BreakHis dataset. The photos in the dataset are 700×460 pixels, acquired at different magnification factors in 3-channel RGB true colour space. The dataset has two subsets, benign and malignant: there are 2,480 samples in the benign subset, divided into four groups, and 5,429 samples in the four subgroups of the malignant subset. A non-repeated random cropping technique is used to balance the dataset, and an image segmentation method is used to enlarge it. The BCDnet model used in this study combines pre-trained ResNet50 and VGG16 models with fine-tuning; the parallel convolution platforms of the model are the VGG16 convolution platform and the ResNet50 convolution platform. The model consists of an input layer, Module 1, Module 2, a concatenate layer, and a softmax classification layer. The input layer is 230×230×3. Module 1 consists of Conv Base1 (the pre-trained ResNet50 model), a flatten layer, and two dense layers. Module 2 consists of Conv Base2 (the pre-trained VGG16 model), a flatten layer, and two dense layers. The outputs of the two modules are combined in the concatenate layer, which then feeds the softmax classification layer that produces the prediction. The most commonly used indicators for model evaluation are accuracy and precision; the model produces results higher than 98%.
Shubham Sharma et al. [5] presented a study of machine learning techniques for the detection of breast cancer. The Wisconsin breast cancer diagnosis dataset is used; it has no missing values and comprises 569 instances and 32 features. The output variable has 357 benign observations and 212 malignant observations. The data is divided into k equal-sized parts and cross-validated using the k-fold method. The study compares Random Forest, KNN, and Naive Bayes, using accuracy, precision, recall, and F1 score to evaluate the effectiveness of the algorithms. The dataset is divided into ten separate chunks using the 10-fold technique: nine folds are used for training while the final fold is used for testing and analysis. Out of the 569 observations, 398 instances are used as the training dataset and 171 for the testing set. Each algorithm is more than 94 percent accurate. Due to its superior performance compared to the other algorithms in terms of accuracy, precision, and F1 score, KNN is the most effective approach for detecting breast cancer in this study.
Bichen Zheng et al. [6] proposed a hybrid strategy for breast cancer screening using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset, which comprises 569 instances with 32 attributes in 10 categories and is used to diagnose the cancer outcome. A hybrid of K-means and support vector machine (K-SVM) techniques is constructed to extract important information and identify the tumour. The hidden patterns of benign and malignant tumours are recognized separately using the K-means method; the training model calculates each tumour's membership in these patterns and treats it as a new feature. An SVM is then used to build the new classifier that distinguishes incoming tumours. Before the data is used to train the classifier, the malignant and benign tumour patterns are separated to decrease the high complexity of the feature space; the patterns are recognised using feature extraction, and ten-fold cross-validation is used. The K-SVM offers consistent and excellent prediction quality: it decreases the dimension of the input feature space and reconstructs the feature representation, which helps the machine learning technique support and optimise the classifier. Feature extraction and selection resulted in a shorter training period for the classifier and increased accuracy due to the removal of noisy data. The accuracy of the K-SVM method is 97.38 percent.
Cuong Nguyen et al. [7] proposed a machine learning approach based on feature selection and random forest methods for the diagnosis and prognosis of breast cancer. The Wisconsin Breast Cancer Diagnosis Dataset (WBCDD) and the Wisconsin Breast Cancer Prognosis Dataset (WBCPD) are the two datasets used in the paper. The learning algorithm is trained using n-fold cross-validation, and a feature ranking process is used that calculates ranking values for the features, estimates the Bayesian probability, and orders the features. The model has a 99.82 percent classification accuracy on WBCDD and a 99.7 percent classification accuracy on WBCPD.
Swati Meena et al. [8] worked on the BreakHis and BACH datasets. Localization within micro histopathology images is intended to improve the interpretability of the classification results. In the proposed methodology the classification problem is framed as a weakly supervised multiple-instance problem, in which an image is a bag of patches; the first step is to find the classification label for the bag, which contains many instances. To locate the cancerous and normal parts in an image and use them to classify the image, they employed Attention-Based Multiple Instance Learning (A-MIL), which learns attention over the patches of the image. Instance-level pooling is the most important component of MIL: it aggregates instance-level features to obtain bag-level features, and A-MIL permits different weights for different instances in a bag, which is extremely useful for the bag-level classifier. The authors evaluated their model against conventional transfer learning methods such as a custom network, pretrained VGG16, and pretrained ResNet18. The A-MIL model achieved a high accuracy of 86.45% on 100x images and also provides a meaningful interpretation of the regions of interest (ROIs).
Yongdong Chen et al. [9] proposed a classification system for breast tumours using ultrasound scans. The BUS image dataset, gathered with the help of Cancer Institute clinicians, was used to train the model. The suggested method combines biclustering and a BP neural network to categorise breast tumours based on BI-RADS scores. 25 different types of features are obtained from the chosen dataset. To avoid the problem of selecting the main features required for the model, a feature selection scheme is developed using the BI-RADS technique, with a criterion created for assessing the choice. Biclusters, regionally consistent patterns composed of a subset of the original matrix's features (columns) and tumour cases (rows), are then created; instances with identical scores for a particular collection of attributes most likely share the same diagnostic finding. As a consequence, a new dataset is created with columns for the distance to each diagnostic rule and rows for the breast cancer cases. This information is fed to the BP neural network, which divides the cases into benign and malignant tumours. When compared to other models, the proposed model achieves a 96.1% accuracy rate.
Majid Nawaz et al. [10] proposed a multi-class breast cancer classification model using deep learning methods. The chosen CNN is the DenseNet model, which acts as a backbone; a feedforward operation identifies the mapping function, and backpropagation, specifically the gradient descent algorithm, minimizes the loss function. They modified the DenseNet model to utilize histopathology images for classification: to categorize breast cancer tumours, the model has 3 transition layers and 4 dense blocks. Instead of the 1,000 classes of the ImageNet dataset, they constructed a softmax layer with the 8 classes of breast cancer histopathology images, and they initialized the model layers with the weights of the model pre-trained on ImageNet. The pre-trained weights are kept and only the last layer is retrained on this dataset; the first convolutional layer of the network is then unfrozen and the entire network is fine-tuned using the BreakHis training data. When tested on data set aside for validation across all magnification factors, the model achieves an average image-level accuracy of 95.4 percent and a patient-level accuracy of 96.48 percent.
Abdulhamit Subasi et al. [11] worked to create an efficient automatic diagnostic system that can discriminate between benign and malignant breast tumours, using two separate Wisconsin breast cancer datasets to assess the suggested approach. 30 attributes are extracted from the first dataset and 9 attributes from the second. Using genetic algorithms, useful and meaningful features are first selected by removing unimportant ones; this makes the data mining less computationally complex and faster. The second stage of data mining employs a variety of methods: Bayesian Network, MLP, RBFN, Random Forest, Logistic Regression, Decision Trees, and SVM. To develop novel classifiers the authors used the Rotation Forest method, in which data is divided into subgroups, PCA is applied, and new features are created by linearly transforming the data. In the second step, various individual and multiple classifier systems were built to provide an accurate system for classifying breast cancer. The multiple classifier system (MCS) Rotation Forest without GA achieved 97.41%, while the accuracy achieved with genetic algorithms and Rotation Forest is 99.48%, greater than the model without GA.
Peter Adebayo Idowu et al. [12] used the LASUTH breast cancer dataset, which contains 69 instances with 17 attributes. Based on patient data from the LASUTH dataset, this study employs data mining techniques to categorise the risk of developing breast cancer: the breast cancer cases were categorised using the WEKA software with J48 decision trees and Naive Bayes. The Naive Bayes classifier is a probabilistic model that relies on Bayes' theorem, while the fundamental idea of ID3 is to construct a decision tree based on the statistical measure of information gain. Experimentally, the J48 decision tree correctly classified 65 of the 69 training instances and misclassified 4, while Naive Bayes correctly classified 57 and misclassified 12. Thus, the accuracies of J48 and Naive Bayes are 94.2% and 82.6% respectively, and the J48 decision tree was the better approach in this study.
Lal et al. [13] described methods that can be used to differentiate between cancerous and non-cancerous mammograms, including the Bayesian approach, SVM kernels, and decision trees. They made use of SIFT, EFD, morphology, and texture features. Performance metrics for these algorithms include specificity, sensitivity, NPV, FPR, and AUC. The results demonstrate that either strong sensitivity or good specificity can be obtained with the given parameter settings. The data were taken from the publicly available University of South Florida database, which holds approximately 2,500 studies; they used the 12 latest normal volumes and 15 latest cancer volumes, a total of 899 images including 500 cancer images.
METHODOLOGY
The methodology used for this project is explained in two sections: the model uses the concept of federated learning, which is explained in Section A, and the local model used within federated learning is DenseNet, which is explained in Section B.
FEDERATED LEARNING
Because confidentiality is paramount for medical data, the data should not be collected or analysed by an institution or person who does not have the authority to access it, and access to the data should be decentralized. To implement decentralized learning in our research, we therefore use federated learning.
Federated learning enables individual sites to jointly train a global model without explicitly sharing datasets: training results from various sites are combined to produce a global model, so hospitals and other medical facilities can keep their data localized while the model is trained by being distributed to the remote data silos. Throughout training, no collaborator's data is ever sent or exchanged. Unlike traditional deep learning, the central server does not receive the data; instead it maintains a global shared model that is shared among all institutions.
Each institution has a separate model that trains on its own patient data; these local models then send their trained weights or error gradients to the global (central) server. The central server aggregates the information sent by the individual local models and updates the global model, so the data from the individual centres adds value to the global model by contributing more features. This constitutes a single round of federated learning, repeated until the model is completely trained.
In our model we mimic the concept of federated learning. The dataset used for this process is the BreakHis dataset. The dataset is extracted and pre-processing is performed on it, such as normalization of the data and resizing of the images. After pre-processing, the image list and label list are prepared. As the first step in the federated learning setup, we create multiple clients which act as individual data centres or institutions: a function takes the image and label lists along with the number of clients as input, and the data is shared equally among the created clients by sharding it, with the number of shards equal to the number of clients.
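A minimal sketch of this client-creation step is given below. The function name `create_clients`, the client naming scheme, and the shuffling step are our assumptions for illustration, not taken verbatim from the implementation.

```python
import random

def create_clients(image_list, label_list, num_clients=4, prefix="client"):
    """Shard the (image, label) pairs equally among num_clients simulated institutions."""
    client_names = [f"{prefix}_{i+1}" for i in range(num_clients)]

    # Pair images with labels and shuffle so each shard sees all classes.
    data = list(zip(image_list, label_list))
    random.shuffle(data)

    # One shard per client; the number of shards equals the number of clients.
    shard_size = len(data) // num_clients
    shards = [data[i * shard_size:(i + 1) * shard_size] for i in range(num_clients)]

    return dict(zip(client_names, shards))
```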
The client shards are then batched: each client's data shard is taken and a TensorFlow dataset object is created from it, with the labels and data separated. Training and test datasets are created for the clients.
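One possible form of this batching step is sketched below, building on the shards produced above; the batch size of 32 is our choice and is not stated in the paper.

```python
import tensorflow as tf

def batch_data(data_shard, batch_size=32):
    """Turn a client's shard of (image, label) pairs into a batched tf.data.Dataset."""
    images, labels = zip(*data_shard)                      # separate data and labels
    dataset = tf.data.Dataset.from_tensor_slices((list(images), list(labels)))
    return dataset.shuffle(len(labels)).batch(batch_size)

# clients = create_clients(image_list, label_list, num_clients=4)
# client_batched = {name: batch_data(shard) for name, shard in clients.items()}
```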
A local model is now created to train on the data held by each client. The model used here is a special CNN called DenseNet, which performs the multi-class classification of breast tumours; in our experiment the local and global models share the same architecture. The next part of this section covers the model itself; here we cover only the concepts related to federated learning.
After the local and global models are ready, we initialize a variable of communication (COMM) rounds that defines how many rounds of federated learning are needed for the model to learn all the features and update the global model.
Initially the global model is pre-trained with ImageNet weights, and the resulting weights are used by all local models. A local model is selected randomly and, in the first iteration, the weights of the global model are assigned to it; the local model is then trained with the appropriate optimizer and the metrics to be computed.
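A hedged sketch of this per-client step follows, assuming a Keras model builder `build_densenet()` (sketched in Section B) and an SGD optimizer; the optimizer choice and epoch count are our assumptions.

```python
def train_local_model(global_weights, client_dataset, build_densenet, epochs=1):
    """One client update: start from the current global weights, train locally."""
    local_model = build_densenet()                 # same architecture as the global model
    local_model.compile(optimizer="sgd",
                        loss="categorical_crossentropy",
                        metrics=["accuracy"])
    local_model.set_weights(global_weights)        # start from the shared global state
    local_model.fit(client_dataset, epochs=epochs, verbose=0)
    return local_model.get_weights()
```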
The weights extracted from the local model are then scaled by a scaling factor: we calculate the total number of training data points across all clients and the number of data points held by the current client, and the weight scaling factor is the ratio of the local data points to the global total. The scaled weights are appended to a list.
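The scaling step can be expressed as below; the helper names are ours, and `client_batched` is the dictionary of batched client datasets assumed earlier.

```python
def weight_scaling_factor(client_batched, client_name):
    """Fraction of the total training points held by this client."""
    sizes = {name: sum(1 for _ in ds.unbatch()) for name, ds in client_batched.items()}
    return sizes[client_name] / sum(sizes.values())

def scale_model_weights(weights, scale):
    """Multiply every weight tensor of a local model by the client's scaling factor."""
    return [scale * w for w in weights]
```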
After all the local models have been executed, their scaled weights are collected in a list and averaged at the end of each communication round; the global model is then updated with these average weights.
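A sketch of the aggregation and of one communication round is shown below. Since each local contribution is already multiplied by its data fraction, summing the scaled weights yields the weighted average (FedAvg-style); the loop structure and names are our assumptions under the setup above.

```python
import numpy as np

def sum_scaled_weights(scaled_weight_list):
    """Element-wise sum of already-scaled client weights = weighted average."""
    return [np.sum(layer_group, axis=0) for layer_group in zip(*scaled_weight_list)]

# COMM_ROUNDS = 8
# global_model = build_densenet()
# for round_num in range(COMM_ROUNDS):
#     global_weights = global_model.get_weights()
#     scaled_weight_list = []
#     for name, dataset in client_batched.items():
#         local_weights = train_local_model(global_weights, dataset, build_densenet)
#         scale = weight_scaling_factor(client_batched, name)
#         scaled_weight_list.append(scale_model_weights(local_weights, scale))
#     global_model.set_weights(sum_scaled_weights(scaled_weight_list))
```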
In every round the global model is updated with the average weights, which adds new features, and since each local model starts the next round from the previous global model weights, the local models are updated as well. In this way the global model never stores or accesses the confidential data; it only has access to the weights of the local models. This implements the concept of federated learning.
In the next part we describe the global and local model, DenseNet, followed by the DenseNet architecture.
DENSENET
In this research we use a special type of convolutional neural network (CNN) called DenseNet for the purpose of classification. The model not only classifies the data as benign or malignant but also identifies the 8 subclasses present within them.
Convolutional Neural Networks (CNNs), which stack several layers of neurons in a resilient manner, are one of the most remarkable deep learning techniques, and they have demonstrated outstanding generalization on massive datasets containing millions of images. In a traditional CNN, each layer except the first receives the output of the convolutional layer before it, creates an output feature map, and passes it on to the subsequent convolutional layer. As a result, a network with 'L' layers has 'L' direct connections, one from each layer to the next.
The problem with traditional deep networks is that as the network gets deeper, information from the input layer can vanish or get lost before reaching the output layer, which reduces the effectiveness of the network; this is the vanishing gradient problem.
DenseNet solves this problem by changing the traditional convolutional network and simplifying the connectivity pattern: each layer in a DenseNet is connected directly to every other layer, giving rise to the name densely connected CNN, so a network with 'L' layers has L(L+1)/2 direct connections.
The type of DenseNet used in our experiment is the DenseNet-121 architecture. DenseNet has various components: connectivity, dense blocks, growth rate, and bottleneck layers. The feature maps of previous layers are concatenated instead of summed; this allows feature reuse while unimportant features are discarded, and layer i receives the inputs of all the layers before it.
The DenseNet-121 architecture contains the following layers: one 7×7 convolution layer, 58 3×3 convolution layers, 61 1×1 convolution layers, 4 average pooling layers, and one fully connected layer. In total, DenseNet-121 thus contains 120 convolution layers and 4 average pooling layers.
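To make the concatenation-based connectivity concrete, the sketch below builds a small dense block with Keras; the block size, growth rate, and layer choices are illustrative assumptions rather than the exact DenseNet-121 configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, num_layers=4, growth_rate=32):
    """Each new layer sees the concatenation of all previous feature maps."""
    for _ in range(num_layers):
        y = layers.BatchNormalization()(x)
        y = layers.ReLU()(y)
        y = layers.Conv2D(4 * growth_rate, 1, padding="same")(y)   # 1x1 bottleneck
        y = layers.BatchNormalization()(y)
        y = layers.ReLU()(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)       # 3x3 convolution
        x = layers.Concatenate()([x, y])   # concatenation, not summation
    return x

# Example: inputs = tf.keras.Input(shape=(56, 56, 64)); block_out = dense_block(inputs)
```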
DenseNet was developed to handle non-microscopic images. To address this, the first convolutional layer employs 7×7 kernels to identify subtle variations and content in the picture and extract more significant information, and the kernel size is then decreased to cope with the intricate structure of the histopathological pictures. The feature map from this stage is condensed by an average pooling layer, with a 7×7 kernel and stride 2, before the fully connected layer. Additionally, rather than using the classes of the ImageNet dataset, we set up the softmax layer for the 8 classes of histopathology images.
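A minimal sketch of how such a backbone and 8-class softmax head could be assembled with Keras is shown below; the global-average-pooling head and the 224×224 input size are our assumptions, not figures from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_densenet(num_classes=8, input_shape=(224, 224, 3)):
    """DenseNet-121 backbone with ImageNet weights and an 8-class softmax head."""
    base = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = layers.GlobalAveragePooling2D()(base.output)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs=base.input, outputs=outputs)
```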
Transfer learning is the process of optimizing CNN models that have already been trained on natural image datasets for use in medical imaging applications. Due to the processing expense, convergence issues, and lack of sufficient high-quality labelled examples, learning from scratch on clinical pictures is often not the most feasible approach.
Pre-trained models have been studied in a growing body of work for settings where few training examples are available. We used a model pre-trained on ImageNet to initialize the weights of the various network layers in our suggested architecture. Then, using the BreakHis cancer image dataset, we fine-tuned the final layer: the final fully connected layer was updated continually while the ImageNet pre-trained weights were maintained. The network is then further tuned on BreakHis data with the first convolution layer of the network unfrozen. The benefit of DenseNet is feature concatenation, which lets us manage and modify the features at any level without having to compress them. This approach prevents the number of parameters from expanding, which lowers the complexity of the training process and mitigates the overfitting issue.
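A hedged sketch of this two-stage fine-tuning on top of the `build_densenet()` helper above follows; exactly which layers are frozen or unfrozen, and the optimizer settings, are our assumptions.

```python
import tensorflow as tf

model = build_densenet(num_classes=8)

# Stage 1: keep the ImageNet pre-trained backbone frozen, train only the new softmax head.
for layer in model.layers[:-1]:
    layer.trainable = False
model.compile(optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_dataset, validation_data=val_dataset, epochs=5)

# Stage 2: unfreeze the first convolution layer (along with the head) and fine-tune on BreakHis.
first_conv = next(l for l in model.layers if isinstance(l, tf.keras.layers.Conv2D))
first_conv.trainable = True
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_dataset, validation_data=val_dataset, epochs=5)
```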
In the following sections we present the results obtained from the developed model.
RESULTS AND DISCUSSION
As mentioned in the previous sections, the proposed methodology for classifying breast tumors is the DenseNet 121 architecture combined with federated learning. Performance metrics such as accuracy, specificity, and sensitivity are computed using 80% of the BreakHis dataset for training and the remaining 20% as the validation set. When validated against this set, the global model achieves 90% accuracy, which is a noticeable result among studies on the classification of breast tumors when compared with previous works. To provide an accurate performance evaluation, we compare the proposed model's performance with that of the most powerful CNNs for multi-class classification of histological breast cancer images. The conventional CNN LeNet is employed for handwritten character recognition with exceptional accuracy; however, its performance on histopathology pictures was significantly worse, obtaining only 47% multi-classification accuracy. In the binary categorization of benign and malignant histological pictures, AlexNet produced a detection accuracy of 83%, but it only manages to reach roughly 80% accuracy on the multi-classification task. In addition, our model ensures that patient data is neither stored nor shared, through federated learning: multiple clients run locally and generate weights which are sent to the global model, where the averaging of weights happens. Our local model runs 8 times in the performed experiment, and the performance metric values are recorded below.
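As an illustration of how such per-class metrics could be computed from the validation predictions, a short scikit-learn sketch follows; the variable names are placeholders and this is not the exact evaluation code used in the paper.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# y_true, y_pred: integer class labels (0..7) predicted on the 20% validation split.
def per_class_metrics(y_true, y_pred, num_classes=8):
    cm = confusion_matrix(y_true, y_pred, labels=list(range(num_classes)))
    metrics = {}
    for c in range(num_classes):
        tp = cm[c, c]
        fn = cm[c].sum() - tp
        fp = cm[:, c].sum() - tp
        tn = cm.sum() - tp - fn - fp
        metrics[c] = {
            "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,   # recall for class c
            "specificity": tn / (tn + fp) if (tn + fp) else 0.0,
        }
    accuracy = np.trace(cm) / cm.sum()
    return accuracy, metrics
```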
CONCLUSION
Convolutional neural networks played an important role in the classification task, and significant improvements are noticed in the feature selection process for histopathology images: the DenseNet we implemented produces better results than previous works, with an accuracy of 90%. We also ensured that the clients' private information is not used by the global model by implementing federated learning, in which the global model only uses the weights of the local models and performs weight averaging to train itself. From this we conclude that there is considerable future scope for research in which both confidentiality of medical data and classification of tumors are achieved, which can be very helpful to the medical industry.
REFERENCES
[1] A. I. Pritom, M. A. R. Munshi, S. A. Sabab, and S. Shihab, "Predicting breast cancer recurrence using effective classification and feature selection technique," in 2016 19th International Conference on Computer and Information Technology (ICCIT), 2016, pp. 310-314.
[2] Doreswamy and M. U. Salma, "PSO based fast K-means algorithm for feature selection from high dimensional medical data set," in 10th International Conference on Intelligent Systems and Control (ISCO), 2016, pp. 1-6.
[3] D. F. Riesaputri, C. A. Sari, I. M. S. De Rosal, and E. H. Rachmawanto, "Classification of breast cancer using PNN classifier based on GLCM feature extraction and GMM segmentation," IEEE, October 2020.
[4] Q. He, G. Cheng, and H. Ju, "Parallel heterogeneous eight-class classification model of breast pathology," Institute of Computer Technology, Beijing Union University, Beijing, China, 2021.
[5] S. Sharma, A. Aggarwal, and T. Choudary, "Breast cancer detection using machine learning algorithms," in International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS), 2018, pp. 114-118.
[6] B. Zheng, S. W. Yoon, and S. S. Lam, "Breast cancer diagnosis based on feature extraction using a hybrid of K-means and support vector machine algorithms," Department of Systems Science and Industrial Engineering, State University of New York at Binghamton, Binghamton, NY 13902, United States, pp. 1476-1482, 2014.
[7] C. Nguyen, Y. Wang, and H. N. Nguyen, "Random forest classifier combined with feature selection for breast cancer diagnosis and prognostic," J. Biomedical Science and Engineering, vol. 6, pp. 551-560, 2013.
[8] A. Patil, D. Tamboli, S. Meena, D. Anand, and A. Sethi, "Breast cancer histopathology image classification and localization using multiple instance learning," in 2019 5th IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE), 2019.
[9] Y. Chen, L. Ling, and Q. Huang, "Classification of breast tumors in ultrasound using biclustering mining and neural network," in Proc. 9th Int. Congr. Image Signal Process., Biomed. Eng. Informat. (CISP-BMEI), Datong, China, Oct. 2016, pp. 1787-1791.
[10] M. Nawaz, A. A. Sewissy, and T. H. A. Soliman, "Multi-class breast cancer classification using deep learning convolutional neural network," International Journal of Advanced Computer Science and Applications, vol. 9, no. 6, pp. 316-322, 2018.
[11] E. Alickovic and A. Subasi, "Breast cancer diagnosis using GA feature selection and Rotation Forest," Neural Computing and Applications, vol. 28, pp. 753-763, 2015.
[12] P. A. Idowu, K. O. Williams, J. A. Balogun, and A. I. Oluwaranti, "Breast cancer risk prediction using data mining classification techniques," Transactions on Networks and Communications, vol. 3, no. 2, pp. 1-11, April 2015.
[13] L. Hussain, W. Aziz, S. Saeed, S. Rathore, and M. Rafique, "Automated breast cancer detection using machine learning techniques by extracting different feature extracting strategies," 2324-9013, IEEE.