
Brain Tumor MRI CNN

Purpose

Using Convolutional Neural Networks to detect brain tumors in MRI scans. Hopefully, work like this can one day help automate medical imaging diagnosis, or at least give doctors a second opinion.

Inputs and Outputs

In [1]:
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras import backend as K
import os
from PIL import Image
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import pandas as pd
 
Using TensorFlow backend.
In [2]:
os.listdir('../input/brain-mri-images-for-brain-tumor-detection')
Out [2]:
['no', 'brain_tumor_dataset', 'yes']
In [3]:
enc = OneHotEncoder()
enc.fit([[0], [1]])  # two classes: 0 and 1

# Label 0 comes from the 'yes' folder (tumor present), label 1 from 'no' (healthy)
def names(number):
    if number == 0:
        return 'Tumor'
    else:
        return 'Normal'
 
/opt/conda/lib/python3.6/site-packages/sklearn/preprocessing/_encoders.py:415: FutureWarning: The handling of integer data will change in version 0.22. Currently, the categories are determined based on the range [0, max(values)], while in the future they will be determined based on the unique values.
If you want the future behaviour and silence this warning, you can specify "categories='auto'".
In case you used a LabelEncoder before this OneHotEncoder to convert the categories to integers, then you can now use the OneHotEncoder directly.
  warnings.warn(msg, FutureWarning)
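The warning above is harmless, but it can be avoided. A minimal sketch (an optional variant, not what the notebook ran) that passes categories='auto', alongside the keras.utils.to_categorical equivalent:

from keras.utils import to_categorical

# Optional variant: categories='auto' opts into the future behaviour
# and silences the FutureWarning shown above
enc_auto = OneHotEncoder(categories='auto')
enc_auto.fit([[0], [1]])
print(enc_auto.transform([[0]]).toarray())  # [[1. 0.]]

# to_categorical builds the same one-hot vector without sklearn
print(to_categorical(0, num_classes=2))     # [1. 0.]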
In [4]:
data = []   # image arrays
paths = []  # image file paths
ans = []    # one-hot labels

# Collect every .jpg path under the 'yes' (tumor) folder
for r, d, f in os.walk(r'../input/brain-mri-images-for-brain-tumor-detection/yes'):
    for file in f:
        if '.jpg' in file:
            paths.append(os.path.join(r, file))

# Resize to 128x128 and keep only 3-channel RGB images; label 0 = Tumor
for path in paths:
    img = Image.open(path)
    x = np.array(img.resize((128, 128)))
    if x.shape == (128, 128, 3):
        data.append(x)
        ans.append(enc.transform([[0]]).toarray())
In [5]:
paths = []
# Collect every .jpg path under the 'no' (healthy) folder
for r, d, f in os.walk(r"../input/brain-mri-images-for-brain-tumor-detection/no"):
    for file in f:
        if '.jpg' in file:
            paths.append(os.path.join(r, file))

# Same preprocessing as above; label 1 = Normal
for path in paths:
    img = Image.open(path)
    x = np.array(img.resize((128, 128)))
    if x.shape == (128, 128, 3):
        data.append(x)
        ans.append(enc.transform([[1]]).toarray())
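The two loading cells above differ only in the folder and the label. A hypothetical refactor (not in the original notebook) that removes the duplication:

def load_folder(folder, label):
    # Walk the folder, resize each .jpg to 128x128 and keep RGB images only
    for r, d, f in os.walk(folder):
        for file in f:
            if '.jpg' in file:
                x = np.array(Image.open(os.path.join(r, file)).resize((128, 128)))
                if x.shape == (128, 128, 3):
                    data.append(x)
                    ans.append(enc.transform([[label]]).toarray())

load_folder('../input/brain-mri-images-for-brain-tumor-detection/yes', 0)  # Tumor
load_folder('../input/brain-mri-images-for-brain-tumor-detection/no', 1)   # Normal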
In [6]:
data = np.array(data)
data.shape
Out [6]:
(139, 128, 128, 3)
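Note that the pixel values are still raw uint8 in [0, 255]; the original run feeds them to the network unscaled, which likely contributes to the very large initial losses seen during training below. A one-line preprocessing step (hypothetical, not applied in this run) that usually stabilizes training:

data = data.astype('float32') / 255.0  # scale pixels to [0, 1]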
In [7]:
ans = np.array(ans)
ans = ans.reshape(139, 2)  # stack the (1, 2) one-hot rows into a (139, 2) label matrix
In [8]:
model = Sequential()

# Block 1: two 32-filter convolutions (note the first layer specifies no
# activation, so it is linear), then normalize, downsample and regularize
model.add(Conv2D(32, kernel_size=(2, 2), input_shape=(128, 128, 3), padding='same'))
model.add(Conv2D(32, kernel_size=(2, 2), activation='selu', padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# Block 2: two 64-filter convolutions with the same pattern
model.add(Conv2D(64, kernel_size=(2, 2), activation='selu', padding='same'))
model.add(Conv2D(64, kernel_size=(2, 2), activation='selu', padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.25))

# Classifier head: flatten, one dense layer, softmax over the two classes
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='Adamax')
print(model.summary())
 
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 128, 128, 32)      416       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 128, 128, 32)      4128      
_________________________________________________________________
batch_normalization_1 (Batch (None, 128, 128, 32)      128       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 64, 64, 32)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 64, 64, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 64, 64, 64)        8256      
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 64, 64, 64)        16448     
_________________________________________________________________
batch_normalization_2 (Batch (None, 64, 64, 64)        256       
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 32, 32, 64)        0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 32, 32, 64)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 65536)             0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               33554944  
_________________________________________________________________
dropout_3 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 1026      
=================================================================
Total params: 33,585,602
Trainable params: 33,585,410
Non-trainable params: 192
_________________________________________________________________
None
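Because compile() was called without metrics, history will track only the loss. A variant (not what the notebook ran) that also records accuracy:

model.compile(loss='categorical_crossentropy',
              optimizer='Adamax',
              metrics=['accuracy'])  # adds 'acc'/'val_acc' to history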
In [9]:
x_train,x_test,y_train,y_test = train_test_split(data, ans, test_size=0.2, shuffle=True, random_state=69)
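With only 139 samples, an unstratified 80/20 split can leave the two classes unevenly represented in the 28-image validation set. An optional variant (not used above) that stratifies on the class labels:

x_train, x_test, y_train, y_test = train_test_split(
    data, ans, test_size=0.2, shuffle=True, random_state=69,
    stratify=ans.argmax(axis=1))  # keep the tumor/normal ratio in both sets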
In [10]:
history = model.fit(x_train, y_train, epochs=30, batch_size=40, verbose=1, validation_data=(x_test, y_test))
 
Train on 111 samples, validate on 28 samples
Epoch 1/30
111/111 [==============================] - 5s 49ms/step - loss: 97.9745 - val_loss: 277.9749
Epoch 2/30
111/111 [==============================] - 0s 2ms/step - loss: 86.0070 - val_loss: 161.2324
Epoch 3/30
111/111 [==============================] - 0s 2ms/step - loss: 44.8761 - val_loss: 22.8152
Epoch 4/30
111/111 [==============================] - 0s 2ms/step - loss: 22.8710 - val_loss: 13.1432
Epoch 5/30
111/111 [==============================] - 0s 2ms/step - loss: 12.6436 - val_loss: 25.8424
Epoch 6/30
111/111 [==============================] - 0s 2ms/step - loss: 8.0254 - val_loss: 15.9823
Epoch 7/30
111/111 [==============================] - 0s 2ms/step - loss: 5.6897 - val_loss: 6.6303
Epoch 8/30
111/111 [==============================] - 0s 2ms/step - loss: 2.4485 - val_loss: 4.8793
Epoch 9/30
111/111 [==============================] - 0s 2ms/step - loss: 0.9813 - val_loss: 4.9282
Epoch 10/30
111/111 [==============================] - 0s 2ms/step - loss: 1.8591 - val_loss: 5.8082
Epoch 11/30
111/111 [==============================] - 0s 2ms/step - loss: 1.8579 - val_loss: 4.9097
Epoch 12/30
111/111 [==============================] - 0s 2ms/step - loss: 0.9552 - val_loss: 4.0293
Epoch 13/30
111/111 [==============================] - 0s 2ms/step - loss: 0.6528 - val_loss: 3.9330
Epoch 14/30
111/111 [==============================] - 0s 2ms/step - loss: 0.4592 - val_loss: 4.3758
Epoch 15/30
111/111 [==============================] - 0s 2ms/step - loss: 0.3196 - val_loss: 5.1238
Epoch 16/30
111/111 [==============================] - 0s 2ms/step - loss: 0.0131 - val_loss: 5.3396
Epoch 17/30
111/111 [==============================] - 0s 2ms/step - loss: 0.1137 - val_loss: 4.8185
Epoch 18/30
111/111 [==============================] - 0s 2ms/step - loss: 0.1940 - val_loss: 3.8550
Epoch 19/30
111/111 [==============================] - 0s 2ms/step - loss: 0.1083 - val_loss: 3.0845
Epoch 20/30
111/111 [==============================] - 0s 2ms/step - loss: 0.0706 - val_loss: 3.0981
Epoch 21/30
111/111 [==============================] - 0s 2ms/step - loss: 0.1765 - val_loss: 3.4054
Epoch 22/30
111/111 [==============================] - 0s 2ms/step - loss: 0.3862 - val_loss: 3.5916
Epoch 23/30
111/111 [==============================] - 0s 2ms/step - loss: 0.0132 - val_loss: 3.7866
Epoch 24/30
111/111 [==============================] - 0s 2ms/step - loss: 0.0541 - val_loss: 3.7140
Epoch 25/30
111/111 [==============================] - 0s 2ms/step - loss: 0.1772 - val_loss: 2.8360
Epoch 26/30
111/111 [==============================] - 0s 2ms/step - loss: 0.0022 - val_loss: 2.1522
Epoch 27/30
111/111 [==============================] - 0s 2ms/step - loss: 6.7226e-04 - val_loss: 2.1731
Epoch 28/30
111/111 [==============================] - 0s 2ms/step - loss: 0.1826 - val_loss: 2.2111
Epoch 29/30
111/111 [==============================] - 0s 2ms/step - loss: 0.0385 - val_loss: 2.5520
Epoch 30/30
111/111 [==============================] - 0s 2ms/step - loss: 0.0615 - val_loss: 3.0602
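The validation loss bottoms out around epoch 26 and then drifts up again, a typical overfitting pattern on a dataset this small. A sketch (assuming a Keras version that supports restore_best_weights, i.e. 2.2.3+) that stops training once val_loss stops improving:

from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=5,
                           restore_best_weights=True)
history = model.fit(x_train, y_train, epochs=30, batch_size=40, verbose=1,
                    validation_data=(x_test, y_test),
                    callbacks=[early_stop])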
In [11]:
# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper right')
plt.show()
[Plot: training and validation loss over 30 epochs]
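If the model were compiled with metrics=['accuracy'] as sketched after the model summary, the same pattern plots accuracy (in this Keras version the history keys would be 'acc' and 'val_acc'; newer releases use 'accuracy'):

plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='lower right')
plt.show()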
In [12]:
# Load a healthy ('no') scan and run it through the trained model
img = Image.open(r"../input/brain-mri-images-for-brain-tumor-detection/no/N17.jpg")
x = np.array(img.resize((128,128)))
x = x.reshape(1,128,128,3)
answ = model.predict_on_batch(x)
# index of the class with the highest softmax probability
classification = np.where(answ == np.amax(answ))[1][0]
imshow(img)
print(str(answ[0][classification]*100) + '% Confidence This Is ' + names(classification))
 
100.0% Confidence This Is Normal
 
In [13]:
# Load a tumor ('yes') scan and run it through the trained model
img = Image.open(r"../input/brain-mri-images-for-brain-tumor-detection/yes/Y3.jpg")
x = np.array(img.resize((128,128)))
x = x.reshape(1,128,128,3)
answ = model.predict_on_batch(x)
# index of the class with the highest softmax probability
classification = np.where(answ == np.amax(answ))[1][0]
imshow(img)
print(str(answ[0][classification]*100) + '% Confidence This Is A ' + names(classification))
 
100.0% Confidence This Is A Tumor
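The two prediction cells repeat the same load, resize, predict, and report steps. A small hypothetical helper (not in the original notebook) that wraps them:

def predict_image(path):
    img = Image.open(path)
    x = np.array(img.resize((128, 128))).reshape(1, 128, 128, 3)
    probs = model.predict_on_batch(x)[0]
    cls = int(np.argmax(probs))  # class with the highest softmax score
    imshow(img)
    print('{:.1f}% Confidence This Is {}'.format(probs[cls] * 100, names(cls)))

predict_image(r"../input/brain-mri-images-for-brain-tumor-detection/yes/Y3.jpg")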
 

Results

Dataset

Kaggle Notebook

Jupyter Notebook
