Monday, June 3, 2019

Cloud Computing with Machine Learning for Cancer Diagnosis

Cloud computing with machine learning could help us in the early diagnosis of breast cancer

Junaid Ahmad Bhat, Prof. Vinai George and Dr. Bilal Malik

Abstract — The purpose of this study is to develop tools which could help clinicians in primary care hospitals with the early diagnosis of breast cancer. Breast cancer is one of the leading forms of cancer in developing countries and is often detected only at late stages. Detection at later stages results not only in pain and agony for the patients but also places a heavy financial burden on the caregivers. In this paper, we present the preliminary results of the project code-named BCDM (Breast Cancer Diagnosis using Machine Learning), developed in Matlab. The algorithm developed in this project is based on adaptive resonance theory. (Explain the results of this work here ..). The aim of the project is to eventually run the algorithm on a cloud computer, so that a clinician at a primary healthcare centre can use the system for the early diagnosis of patients through a web-based interface from anywhere in the world.

Keywords — Adaptive Resonance Theory, Breast Cancer Diagnosis, FNA

I. Introduction

Breast cancer is one of the most common cancers, ranked second in the world after lung cancer (1). This type of cancer is also ranked second in northern India (1), and it is one of the leading cancers found in Kashmir (1). Classifying cells as malignant or benign is the main goal in the diagnosis of breast cancer, and misclassification costs the patients pain and places an extra burden on health care providers. Due to noise in the data, the classification problem is non-trivial and has thus attracted researchers to apply machine learning to improve the classification (2). Researchers have used different machine learning algorithms to improve the diagnosis of breast cancer.
Neural networks are among the machine learning algorithms that have been widely used for the diagnosis of breast cancer (3). In order to achieve the required accuracy, adaptive resonance theory, one of the variants of neural networks, is used here for prediction purposes. Neural networks gained importance from the 1950s until the late 1960s due to their accuracy and learning capabilities, but interest diminished in the 1980s due to their computational cost. With the advancement of technology, neural networks are again becoming popular due to their ability to fit non-linear hypotheses even when the input feature space is large (4). This work proposes to use a variant of neural networks based on adaptive resonance theory to improve breast cancer diagnosis. The algorithm has been developed and tested in Matlab 2012. Adaptive resonance theory has been tested on many real-life problems, including automated automobile control, classification tasks, and the detection of intruders on the battlefield.

II. Adaptive Resonance Theory (ART)

Adaptive Resonance Theory (ART) is a neural network architecture that generates suitable weights (parameters) by clustering the pattern space. The motive for adopting ART instead of a conventional neural network is to solve the stability-plasticity problem (5). ART networks and algorithms retain the plasticity to learn new patterns while preventing the modification of patterns learned earlier; a stable network will not revert to a previous cluster. In operation, ART accepts an input vector and classifies it into the cluster it most resembles. If the input does not match any existing category, a new category is created by storing that pattern. When a stored pattern is found that matches the input vector within a specified tolerance, that pattern is adjusted to resemble the input vector. A stored pattern is not modified if it does not match the current input within the vigilance parameter.
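The choice/match/reset behaviour described above can be sketched in Python. This is a minimal illustration, not the paper's Matlab code: the function name, the weight arrays, and the default vigilance value are all assumptions, and the choice and match rules shown are the standard ART1 conventions.

```python
import numpy as np

def art1_category_choice(x, bottom_up, top_down, vigilance=0.5):
    """Return the index of the resonating cluster for binary vector x,
    or -1 if every stored pattern fails the vigilance test."""
    x = np.asarray(x, dtype=float)
    inhibited = set()
    while len(inhibited) < bottom_up.shape[0]:
        # Choose the most active cluster node that is not inhibited.
        scores = bottom_up @ x
        for j in inhibited:
            scores[j] = -1.0
        winner = int(np.argmax(scores))
        # Vigilance test: |x AND t_J| / |x| >= rho
        match = np.sum(np.minimum(x, top_down[winner])) / max(np.sum(x), 1e-12)
        if match >= vigilance:
            return winner          # resonance: accept this cluster
        inhibited.add(winner)      # reset: inhibit the node and search again
    return -1                      # no stored pattern matched the input
```

With a high vigilance value the network rejects partial matches and returns -1, which is exactly the situation in which ART would allocate a new category for the input.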
With the help of this mechanism, the problems associated with stability and plasticity can be resolved (5).

Figure 4: ART1 Neural Network Architecture

A. Types of Adaptive Resonance Theory

1) Adaptive Resonance Theory 1
ART1 is the first neural network of the adaptive resonance theory family. It consists of two layers that cluster patterns from the input binary vector, and it accepts input only in the form of binary values (6).

2) Adaptive Resonance Theory 2
ART2 is the second type of adaptive resonance theory network. It is more complex than ART1 and accepts values in the form of continuous-valued vectors. ART2 is more complex because it incorporates a normalization and noise-suppression combination, and it compares the weights needed for the reset mechanism (6).

B. Working of the ART1 Neural Network

The ART1 neural network comprises three layers, each with its own role:
1) Input layer
2) Interface layer
3) Cluster layer

The parameters used in the algorithm are:
Num = number of symptoms
M = clusters (benign, malignant)
bij = bottom-up weights
tji = top-down weights
p = vigilance parameter
s = binary form of the input symptom vector
x = activation vector of the interface layer
||x|| = norm of x, i.e. the sum of the components of x

Step 1: Initialize the parameters L > 1 and 0 < p <= 1; initialize the weights 0 < bij(0) < L / (L - 1 + Num) and tji(0) = 1.
Step 2: While the stopping condition is false, perform Steps 3 to 14.
Step 3: For each training input, do Steps 4 to 13.
Step 4: Set the activation of all F2 units to 0; set the activation of the F1(a) units to the binary form of the symptom vector.
Step 5: Compute the sum of the symptoms: ||s|| = Σi si.
Step 6: Send the symptom vector from the input layer to the interface layer: xi = si.
Step 7: For each cluster node that is not inhibited: if yj ≠ -1, then yj = Σi bij * xi.
Step 8: While reset is true, perform Steps 9 to 12.
Step 9: Find J such that yJ >= yj for all nodes j. If yJ = -1, all nodes are inhibited and the pattern cannot be clustered.
Step 10: Recompute the activation vector x of the interface units: xi = si * tJi.
Step 11: Compute the sum of the components of vector x: ||x|| = Σi xi.
Step 12: Test for the reset condition: if ||x|| / ||s|| < p, set yJ = -1 (inhibit node J) and return to Step 8; if ||x|| / ||s|| >= p, move to the next step.
Step 13: Update the bottom-up and top-down weights: bij(new) = L * xi / (L - 1 + ||x||) and tJi(new) = xi.
Step 14: Test for the stopping condition: stop when the weights no longer change, i.e. bij(new) == bij(previous) and tji(new) == tji(previous).

III. Classifying Breast Cells

The data set for this research was taken from Mangasarian and Wolberg and was obtained using the Fine Needle Aspirate (FNA) approach (7). The data set is publicly available in the UCI repository (7). It contains 699 patient samples in two classes: 458 benign cases and 241 malignant cases.

The attributes of the database are:
Sample Code Number
Clump Thickness
Uniformity of Cell Size
Uniformity of Cell Shape
Marginal Adhesion
Single Epithelial Cell Size
Bare Nuclei
Bland Chromatin
Normal Nucleoli
Mitosis
Class

We have taken this data in its original form. The dataset is available in the UC Irvine Machine Learning Repository (7).

IV. Experiment

Our experiment consists of four different modules, which work in the sequence given in Figure 5 below.

Figure 5: Modules of the Algorithm

A. Modules of the Experiment

1) Pre-processing
In our dataset, not all the features take part in the classification process, so we remove the patient id feature.
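The ART1 training procedure listed in Steps 1 to 14 above can be sketched as a fast-learning loop in Python. This is an illustrative reconstruction, not the paper's Matlab implementation: the function name, the defaults for rho and L, and the fixed cluster budget are all assumptions.

```python
import numpy as np

def train_art1(samples, max_clusters, rho=0.5, L=2.0, epochs=1):
    """Fast-learning ART1 training over binary row vectors `samples`."""
    n = samples.shape[1]
    b = np.full((max_clusters, n), L / (L - 1.0 + n))  # bottom-up weights bij(0)
    t = np.ones((max_clusters, n))                     # top-down weights tji(0) = 1
    for _ in range(epochs):
        for s in samples:
            s = s.astype(float)
            inhibited = set()
            while True:
                scores = b @ s                         # Step 7: yj = sum_i bij * xi
                for j in inhibited:
                    scores[j] = -1.0
                J = int(np.argmax(scores))             # Step 9: winning node J
                x = np.minimum(s, t[J])                # Step 10: xi = si * tJi
                if np.sum(x) / max(np.sum(s), 1e-12) >= rho:   # Step 12: vigilance
                    b[J] = L * x / (L - 1.0 + np.sum(x))       # Step 13: update bij
                    t[J] = x                                   # Step 13: update tJi
                    break
                inhibited.add(J)                       # reset: inhibit node J
                if len(inhibited) == max_clusters:
                    break                              # all nodes inhibited
    return b, t
```

On a toy set of two repeated binary patterns, the loop allocates one cluster per pattern and stores each pattern in the winning node's top-down weights.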
We are then left with ten attributes, so we separate the feature set from the class values as Xij and Yi.

a) Data Normalization
After the pre-processing stage, normalization of Xij (the nine feature vectors) is performed using this equation:
New_val = (current_val - Min_val) / (Max_val - Min_val)
where
New_val = new value after scaling
current_val = current value of the feature vector
Max_val = maximum value of each feature vector
Min_val = minimum value of each feature vector

b) Data Conversion
The new values (New_val) obtained from the previous step are truncated and converted into binary format. Grouping is then done on the basis of range: values falling in the range 0 to 5 are assigned 0, whereas values in the range 5 to 10 are assigned 1. Each sample is then given as input to the ART1 network for training and testing purposes.

2) Recognition Stage
Initially, all components of the input vector are set to zero because no sample has been applied to the input layer. This sets the other two layers to zero, thereby disabling all the neurons and producing zero output. Since all neurons start in the same state, each neuron has an equal chance to win. When the input vector is applied to the recognition layer, each neuron computes a dot product between the input vector and its weight vector. The neuron with the greatest dot product possesses the weights that best match the input vector, and it inhibits all the other outputs of that layer. Thus the recognition layer stores the patterns in the form of weights associated with neurons, one for each class.

3) Comparison Stage
When a neuron in the recognition layer fires, the network passes its output signal back to the comparison layer. The comparison neurons that fire are those that receive signals simultaneously from the input feature vector and from the recognition layer's excitation vector.
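The normalization and conversion equations of the pre-processing module above can be sketched as follows. The names are illustrative, and the 0.5 threshold is our reading of the paper's 0-5 / 5-10 grouping applied after the attribute values are scaled to [0, 1].

```python
import numpy as np

def normalize(X):
    """Min-max scaling per feature:
    New_val = (current_val - Min_val) / (Max_val - Min_val)."""
    X = np.asarray(X, dtype=float)
    mins, maxs = X.min(axis=0), X.max(axis=0)
    return (X - mins) / np.where(maxs > mins, maxs - mins, 1.0)

def binarize(X_scaled, threshold=0.5):
    """Convert scaled values to the binary form ART1 expects:
    values below the threshold map to 0, the rest to 1."""
    return (X_scaled >= threshold).astype(int)
```

The binary rows produced by `binarize` are what would be fed to the ART1 network for training and testing.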
If there is a mismatch between these two, only a few neurons in the comparison layer fire, and the resulting vector X differs from the input. This means that the pattern P being fed back is not the one sought, and the firing neuron in the recognition layer should be inhibited. The symptom vector and the interface layer vector are then compared, and if the ratio is less than the vigilance parameter, the network issues a reset, which sets the firing neuron in the recognition layer to zero and disables it for the current classification.

4) Search Stage
The classification process finishes if no reset signal is generated. Otherwise, the other stored patterns are searched to find a correct match. This process continues until either all the stored patterns have been tried or all recognition neurons are inhibited.

V. Results

The performance of the algorithm was studied as follows. The training percentage, testing percentage, total time taken, and relative efficiency when the vigilance parameter is 0.5 are given in the chart.

Figure 6: Classification performance with vigilance parameter 0.5

The efficiency of the network with vigilance parameter 0.7 on different percentages of training and testing data is given in Figure 7. Keeping the vigilance parameter at 0.7 but using different proportions of training and testing data, we obtained better efficiency than in Figure 7, as shown in Figure 8.

Figure 7: Classification performance with vigilance parameter 0.7
Figure 8: Efficiency on different proportions of data

The efficiency of the network with vigilance parameter 0.9 on different percentages of training and testing data is given below.

Figure 9: Efficiency of the network with vigilance parameter 0.9

The maximum and minimum times for training the network under different tolerance factors are given in the table.

Table 1: Calculation of training time

VI. Conclusion

In this paper, we evaluated adaptive resonance theory for the diagnosis of breast cancer using the Wisconsin data set.
Several tests were run on different proportions of training and testing data, and we concluded that taking the vigilance parameter as 0.5, with 90% of the data for training and 10% for testing, achieves the best results. Although we have taken all the parameters into account, in the further scope of this research we plan to use a feature selection process to reduce the time and improve the accuracy. In addition, we plan to take a dataset from a local hospital so that the work can be used for the benefit of society.

References
(1) Afroz, Fir, et al. Journal of Cancer Research and Therapeutics, Vol. 8, 2012.
(2) Shashikant Ghumbre, Chetan Patil, Ashok Ghatol. Heart Disease Diagnosis using Support Vector Machine. International Conference on Computer Science and Information Technology, Pattaya, Dec. 2011.
(3) Stefan Conrady, Lionel Jouffe. Breast Cancer Diagnostics with Bayesian Networks. Bayesia, 2013.
(4) Yiping Dong. A Study on Hardware Design for High Performance Artificial Neural Network by using FPGA and NoC. Doctoral dissertation, Waseda University, July 2011.
(5) S. N. Sivanandam, S. Sumathi, S. N. Deepa. Introduction to Neural Networks using Matlab 6.0. Tata McGraw-Hill, 2006.
(6) K. Mumtaz, S. A. Sheriff, K. Duraiswamy. Evaluation of Three Neural Network Models using Wisconsin Breast Cancer data.
(7) UCI Wisconsin data set. Online, cited 30-10-2014. http://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Original)
