Detecting wafer defects with a CNN on the Kaggle WM-811K dataset
In [1]:
import os
from os.path import join
import numpy as np
import pandas as pd
import tensorflow as tf
import keras
from keras import layers, Input, models
from keras.utils import to_categorical
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
print(os.listdir("../project(2)"))
import warnings
warnings.filterwarnings("ignore")
Using TensorFlow backend.
['.ipynb_checkpoints', '.vscode', '19 Holyday Inn Hotel.jpg', 'Adain_transfer.ipynb', 'agumented.jpg', 'art', 'Blur.py', 'bobPool0.jpg', 'bobPool1.jpg', 'bobPool2.jpg', 'bobPool3.jpg', 'cat&dog.ipynb', 'cat&dog_set', 'chromedriver.exe', 'CNN_networks', 'Color.py', 'Contrast.py', 'Crawled_Img(Tesla)', 'Crawling_Naver.py', 'Crawling_Pinterst.py', 'Data03.csv', 'Data04.csv', 'dataset', 'datasets', 'decoder.pth', 'eda_nlp', 'Geometry.py', 'ImgCorruptlike.py', 'labelpix', 'LSWMD.pkl', 'me.jpg', 'Neural_style_transfer.ipynb', 'night_star.jpg', 'night_style.jpg', 'Plotly시각화.ipynb', 'project - 복사본.ui', 'project.py', 'project.spec', 'project.ui', 'pul.jpg', 'pyqy5.py', 'requirements.txt', 'ResNet50V2.ipynb', 's.jpg', 'saeggi(2)_test.xml', 'samsung.csv', 'sansu.jpg', 'Save.py', 'seoultech.jpg', 'seoultech2.jpg', 'siva', 'src.txt', 'style.jpg', 'style2.jpg', 'style_exam.jpg', 'stylized_campus-image.png', 'stylized_image0.png', 'stylized_image1.png', 'stylized_image2.png', 'stylized_image3.png', 'stylized_image4.png', 'stylized_suck.png', 'stylized_suck1-image.png', 'stylized_suck3-image.png', 'stylized_suck4-image.png', 't.jpg', 'test.jpg', 'test_aug', 'trash.py', 'Untitled.ipynb', 'vgg_normalised.pth', 'wafer_defect탐지.ipynb', 'weather.csv', 'weather_100.csv', 'weather_10000.csv', 'zic.mp4', '__pycache__', '구글이미지크롤링(미완성).py', '네이버이미지크롤링(데모).py', '데이터셋생성연습.ipynb', '데이터시각화.ipynb', '삼성과목입력.py', '수강신청.py', '암거나하장.py', '웹캠샘플링.py', '컴비수업1.ipynb', '크롤링공부.py']
In [2]:
df=pd.read_pickle("../project(2)/LSWMD.pkl")
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 811457 entries, 0 to 811456
Data columns (total 6 columns):
 #   Column          Non-Null Count   Dtype
---  ------          --------------   -----
 0   waferMap        811457 non-null  object
 1   dieSize         811457 non-null  float64
 2   lotName         811457 non-null  object
 3   waferIndex      811457 non-null  float64
 4   trianTestLabel  811457 non-null  object
 5   failureType     811457 non-null  object
dtypes: float64(2), object(4)
memory usage: 37.1+ MB
- The dataset consists of 811,457 wafer maps, together with additional information such as the wafer die size, lot name, and wafer index.
In [3]:
df.head()
Out[3]:
| | waferMap | dieSize | lotName | waferIndex | trianTestLabel | failureType |
|---|---|---|---|---|---|---|
| 0 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... | 1683.0 | lot1 | 1.0 | [[Training]] | [[none]] |
| 1 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... | 1683.0 | lot1 | 2.0 | [[Training]] | [[none]] |
| 2 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... | 1683.0 | lot1 | 3.0 | [[Training]] | [[none]] |
| 3 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... | 1683.0 | lot1 | 4.0 | [[Training]] | [[none]] |
| 4 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... | 1683.0 | lot1 | 5.0 | [[Training]] | [[none]] |
In [4]:
import matplotlib.pyplot as plt
%matplotlib inline
uni_Index=np.unique(df.waferIndex, return_counts=True)
plt.bar(uni_Index[0],uni_Index[1], color='gold', align='center', alpha=0.5)
plt.title("Wafer index distribution")
plt.xlabel("index #")
plt.ylabel("frequency")
plt.xlim(0,26)
plt.ylim(30000,34000)
plt.show()
The data is not evenly distributed across wafer indices.
Since the wafer index feature is not needed for classification, some feature engineering is required: we simply drop the column.
In [5]:
df = df.drop(['waferIndex'], axis = 1)
The wafer map column by itself does not tell us much, but the die size differs from instance to instance.
Create a new variable, 'waferMapDim', to inspect each wafer map's dimensions.
In [6]:
def find_dim(x):
    dim0 = np.size(x, axis=0)
    dim1 = np.size(x, axis=1)
    return dim0, dim1
df['waferMapDim']=df.waferMap.apply(find_dim)
df.sample(10)
Out[6]:
| | waferMap | dieSize | lotName | trianTestLabel | failureType | waferMapDim |
|---|---|---|---|---|---|---|
| 398224 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... | 1338.0 | lot23837 | [] | [] | (41, 42) |
| 264380 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... | 2466.0 | lot16372 | [] | [] | (56, 57) |
| 85292 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2,... | 939.0 | lot6110 | [] | [] | (39, 31) |
| 534024 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 1, 1,... | 710.0 | lot33381 | [] | [] | (32, 29) |
| 777611 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... | 1414.0 | lot46169 | [[Test]] | [[none]] | (42, 44) |
| 142591 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2,... | 710.0 | lot9220 | [] | [] | (32, 29) |
| 465281 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2,... | 712.0 | lot28622 | [] | [] | (32, 29) |
| 716008 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... | 1485.0 | lot43668 | [[Test]] | [[none]] | (45, 42) |
| 745085 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... | 822.0 | lot44847 | [[Test]] | [[none]] | (22, 50) |
| 76132 | [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,... | 846.0 | lot5634 | [] | [] | (33, 33) |
Fetching wafer maps of a specific resolution
- Load the sub-wafers that have a specific resolution.
- Here we take the wafers with a (26, 26) resolution and collect the wafer arrays together with their case labels. Some wafers have null labels, which are skipped.
In [7]:
sub_df = df.loc[df['waferMapDim'] == (26, 26)]
sub_wafer = sub_df['waferMap'].values
sw = np.ones((1, 26, 26))
label = list()
for i in range(len(sub_df)):
    # skip null labels
    if len(sub_df.iloc[i, :]['failureType']) == 0:
        continue
    sw = np.concatenate((sw, sub_df.iloc[i, :]['waferMap'].reshape(1, 26, 26)))
    label.append(sub_df.iloc[i, :]['failureType'][0][0])
In [8]:
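# drop the dummy all-ones wafer used to initialize sw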
x = sw[1:]
y = np.array(label).reshape((-1,1))
In [9]:
# check dimension
print('x shape : {}, y shape : {}'.format(x.shape, y.shape))
x shape : (14366, 26, 26), y shape : (14366, 1)
In [10]:
# plot 1st data
plt.imshow(x[0])
plt.show()
# check faulty case
print('Faulty case : {} '.format(y[0]))
Faulty case : ['none']
- Expand the channel dimension so that a 2D convolutional autoencoder can be used.
In [11]:
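# add a channel axis: (N, 26, 26) -> (N, 26, 26, 1)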
x = x.reshape((-1, 26, 26, 1))
- Build the list of faulty cases and check how imbalanced the classes are.
In [12]:
faulty_case = np.unique(y)
print('Faulty case list : {}'.format(faulty_case))
Faulty case list : ['Center' 'Donut' 'Edge-Loc' 'Edge-Ring' 'Loc' 'Near-full' 'Random' 'Scratch' 'none']
In [13]:
for f in faulty_case:
    print('{} : {}'.format(f, len(y[y == f])))
Center : 90
Donut : 1
Edge-Loc : 296
Edge-Ring : 31
Loc : 297
Near-full : 16
Random : 74
Scratch : 72
none : 13489
Each pixel in a wafer map holds a categorical value: 0 = outside the wafer, 1 = normal die, 2 = defective die. We one-hot encode this single categorical value into channels, expanding the array by one extra dimension.
In [14]:
# one-hot encode the categorical pixel values, one channel per category
new_x = np.zeros((len(x), 26, 26, 3))
for w in range(len(x)):
    for i in range(26):
        for j in range(26):
            new_x[w, i, j, int(x[w, i, j])] = 1
In [15]:
#check new x dimension
new_x.shape
Out[15]:
(14366, 26, 26, 3)
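As an aside, the triple loop above can be vectorized by indexing an identity matrix with the integer wafer map; a minimal sketch, assuming the pixel values are integers in {0, 1, 2} (new_x_fast is a hypothetical name, not part of the original notebook):

# same one-hot channels as the loop, in a single NumPy operation
new_x_fast = np.eye(3)[x[..., 0].astype(int)]   # shape (14366, 26, 26, 3)
assert np.array_equal(new_x_fast, new_x)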
Convolutional autoencoder for data augmentation
- Data augmentation is needed to address the class imbalance problem.
- Since wafer maps are image data, a convolutional autoencoder is the natural choice.
In [16]:
# parameter
epoch=15
batch_size=512
In [17]:
input_shape = (26, 26, 3)
input_tensor = Input(input_shape)
# encoder
encode = layers.Conv2D(64, (3,3), padding='same', activation='relu')(input_tensor)
latent_vector = layers.MaxPool2D()(encode)
# decoder layers, defined separately so they can be reused as a standalone decoder later
decode_layer_1 = layers.Conv2DTranspose(64, (3,3), padding='same', activation='relu')
decode_layer_2 = layers.UpSampling2D()
output_tensor = layers.Conv2DTranspose(3, (3,3), padding='same', activation='sigmoid')
# connect the decoder layers
decode = decode_layer_1(latent_vector)
decode = decode_layer_2(decode)
ae = models.Model(input_tensor, output_tensor(decode))
ae.compile(optimizer='Adam', loss='mse')
In [18]:
ae.summary()
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 26, 26, 3)]       0
_________________________________________________________________
conv2d (Conv2D)              (None, 26, 26, 64)        1792
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 64)        0
_________________________________________________________________
conv2d_transpose (Conv2DTran (None, 13, 13, 64)        36928
_________________________________________________________________
up_sampling2d (UpSampling2D) (None, 26, 26, 64)        0
_________________________________________________________________
conv2d_transpose_1 (Conv2DTr (None, 26, 26, 3)         1731
=================================================================
Total params: 40,451
Trainable params: 40,451
Non-trainable params: 0
_________________________________________________________________
In [19]:
ae.fit(new_x,new_x, batch_size=batch_size,epochs=epoch,verbose=1)
Epoch 1/15
29/29 [==============================] - 5s 24ms/step - loss: 0.1597
Epoch 2/15
29/29 [==============================] - 1s 23ms/step - loss: 0.0858
Epoch 3/15
29/29 [==============================] - 1s 22ms/step - loss: 0.0665
Epoch 4/15
29/29 [==============================] - 1s 23ms/step - loss: 0.0524
Epoch 5/15
29/29 [==============================] - 1s 22ms/step - loss: 0.0415
Epoch 6/15
29/29 [==============================] - 1s 23ms/step - loss: 0.0318
Epoch 7/15
29/29 [==============================] - 1s 23ms/step - loss: 0.0245
Epoch 8/15
29/29 [==============================] - 1s 23ms/step - loss: 0.0200
Epoch 9/15
29/29 [==============================] - 1s 23ms/step - loss: 0.0169
Epoch 10/15
29/29 [==============================] - 1s 22ms/step - loss: 0.0148
Epoch 11/15
29/29 [==============================] - 1s 23ms/step - loss: 0.0131
Epoch 12/15
29/29 [==============================] - 1s 22ms/step - loss: 0.0117
Epoch 13/15
29/29 [==============================] - 1s 23ms/step - loss: 0.0106
Epoch 14/15
29/29 [==============================] - 1s 22ms/step - loss: 0.0096
Epoch 15/15
29/29 [==============================] - 1s 22ms/step - loss: 0.0088
Out[19]:
<tensorflow.python.keras.callbacks.History at 0x241e0723b88>
In [20]:
# build the encoder model from the trained autoencoder's layers
encoder = models.Model(input_tensor, latent_vector)
In [21]:
# build the decoder model from the trained autoencoder's layers
decoder_input = Input((13, 13, 64))
decode = decode_layer_1(decoder_input)
decode = decode_layer_2(decode)
decoder = models.Model(decoder_input, output_tensor(decode))
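Since decode_layer_1, decode_layer_2, and output_tensor are the very layer objects trained inside ae, the standalone decoder shares the trained weights. A quick sanity check (a hypothetical snippet, not run in the original notebook):

# chaining the two sub-models should reproduce the full autoencoder's output shape
recon = decoder.predict(encoder.predict(new_x[:1]))
print(recon.shape)   # (1, 26, 26, 3), same as ae.predict(new_x[:1]).shape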
In [22]:
# encode the existing wafers
encoded_x = encoder.predict(new_x)
In [23]:
# add Gaussian noise to the latent vectors
noised_encoded_x = encoded_x + np.random.normal(loc=0, scale=0.1, size = (len(encoded_x), 13, 13, 64))
In [24]:
# visualize the original wafer data
plt.imshow(np.argmax(new_x[3], axis=2))
Out[24]:
<matplotlib.image.AxesImage at 0x2427ee781c8>
In [25]:
# visualize the wafer data regenerated from the noised latent vectors
noised_gen_x = np.argmax(decoder.predict(noised_encoded_x), axis=3)
plt.imshow(noised_gen_x[3])
Out[25]:
<matplotlib.image.AxesImage at 0x2427a1fc048>
Data augmentation
- Augment roughly 2,000 samples for each faulty case.
In [26]:
def gen_data(wafer, label):
    # encode the class's wafers, then repeatedly decode noised latent vectors
    encoded_x = encoder.predict(wafer)
    gen_x = np.zeros((1, 26, 26, 3))
    for i in range((2000 // len(wafer)) + 1):
        noised_encoded_x = encoded_x + np.random.normal(loc=0, scale=0.1, size=(len(encoded_x), 13, 13, 64))
        noised_gen_x = decoder.predict(noised_encoded_x)
        gen_x = np.concatenate((gen_x, noised_gen_x), axis=0)
    gen_y = np.full((len(gen_x), 1), label)
    return gen_x[1:], gen_y[1:]
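For the smallest classes the loop over-generates slightly: for the 74 'Random' wafers it runs 2000 // 74 + 1 = 28 times, producing 28 × 74 = 2072 samples, consistent with the class counts printed below (74 + 2072 = 2146). A hypothetical standalone call:

gen_x, gen_y = gen_data(new_x[np.where(y == 'Random')[0]], 'Random')
print(gen_x.shape, gen_y.shape)   # (2072, 26, 26, 3) (2072, 1)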
In [27]:
# augment every faulty class
for f in faulty_case:
    # skip the 'none' case
    if f == 'none':
        continue
    gen_x, gen_y = gen_data(new_x[np.where(y == f)[0]], f)
    new_x = np.concatenate((new_x, gen_x), axis=0)
    y = np.concatenate((y, gen_y))
In [59]:
print('After augmentation, new_x shape : {}, y shape : {}'.format(new_x.shape, y.shape))
After augmentation, new_x shape : (30624, 26, 26, 3), y shape : (30707, 1)
(The row counts differ because this cell was re-executed as In [59], after the 'none' down-sampling below had already removed 83 rows from new_x.)
In [29]:
for f in faulty_case:
    print('{} : {}'.format(f, len(y[y == f])))
Center : 2160
Donut : 2002
Edge-Loc : 2368
Edge-Ring : 2046
Loc : 2376
Near-full : 2032
Random : 2146
Scratch : 2088
none : 13489
In [30]:
# randomly select 83 'none' indices, without replacement
none_idx = np.where(y=='none')[0][np.random.choice(len(np.where(y=='none')[0]), size=83, replace=False)]
In [31]:
# remove the data at the selected indices
new_x = np.delete(new_x, none_idx, axis=0)
new_y = np.delete(y, none_idx, axis=0)
In [58]:
print('"none" class 제거 후의 new_x shape : {}, new_y shape : {}'.format(new_x.shape, new_y.shape))
"none" class 제거 후의 new_x shape : (30624, 26, 26, 3), new_y shape : (30624, 9)
In [33]:
for f in faulty_case:
    print('{} : {}'.format(f, len(new_y[new_y == f])))
Center : 2160
Donut : 2002
Edge-Loc : 2368
Edge-Ring : 2046
Loc : 2376
Near-full : 2032
Random : 2146
Scratch : 2088
none : 13406
In [34]:
# map the string labels to integer labels
for i, l in enumerate(faulty_case):
    new_y[new_y == l] = i
# one-hot encode the labels
new_y = to_categorical(new_y)
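faulty_case comes from np.unique, so it is sorted and the mapping is 'Center' → 0, 'Donut' → 1, ..., 'none' → 8; to_categorical then turns each integer into a 9-dimensional one-hot row. A quick hypothetical check:

print(to_categorical([0], num_classes=9))   # [[1. 0. 0. 0. 0. 0. 0. 0. 0.]] for 'Center'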
In [35]:
# split into train and test sets
x_train, x_test, y_train, y_test = train_test_split(new_x, new_y,
test_size=0.33,
random_state=2019)
In [36]:
print('Train x : {}, y : {}'.format(x_train.shape, y_train.shape))
print('Test x: {}, y : {}'.format(x_test.shape, y_test.shape))
Train x : (20518, 26, 26, 3), y : (20518, 9)
Test x: (10106, 26, 26, 3), y : (10106, 9)
In [45]:
def create_model():
    input_shape = (26, 26, 3)
    input_tensor = Input(input_shape)
    conv_1 = layers.Conv2D(16, (3,3), activation='relu', padding='same')(input_tensor)
    conv_2 = layers.Conv2D(64, (3,3), activation='relu', padding='same')(conv_1)
    conv_3 = layers.Conv2D(128, (3,3), activation='relu', padding='same')(conv_2)
    flat = layers.Flatten()(conv_3)
    dense_1 = layers.Dense(512, activation='relu')(flat)
    dense_2 = layers.Dense(128, activation='relu')(dense_1)
    output_tensor = layers.Dense(9, activation='softmax')(dense_2)
    model = models.Model(input_tensor, output_tensor)
    model.compile(optimizer='Adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
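One thing to note about this architecture: flattening the (26, 26, 128) feature map feeds 86,528 units into the 512-unit dense layer, which alone holds 86,528 × 512 + 512 = 44,302,848 weights and dominates the model size. A quick hypothetical check, not part of the original run:

m = create_model()
m.summary()   # the dense layer after Flatten accounts for nearly all of the ~44.4M parameters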
Building the model with cross-validation
- Using sklearn's KFold cross-validation.
In [48]:
# Wrap the Keras model as a scikit-learn classifier.
model = KerasClassifier(build_fn=create_model, epochs=10, batch_size=1024, verbose=2)
# 3-Fold Crossvalidation
kfold = KFold(n_splits=3, shuffle=True, random_state=2019)
results = cross_val_score(model, x_train, y_train, cv=kfold)
# Check 3-fold model's mean accuracy
print('Simple CNN Cross validation score : {:.4f}'.format(np.mean(results)))
Epoch 1/10
14/14 - 2s - loss: 2.0356 - accuracy: 0.4588
Epoch 2/10
14/14 - 2s - loss: 0.8517 - accuracy: 0.7171
Epoch 3/10
14/14 - 2s - loss: 0.5793 - accuracy: 0.8134
Epoch 4/10
14/14 - 2s - loss: 0.2641 - accuracy: 0.9093
Epoch 5/10
14/14 - 2s - loss: 0.1137 - accuracy: 0.9674
Epoch 6/10
14/14 - 2s - loss: 0.0507 - accuracy: 0.9871
Epoch 7/10
14/14 - 2s - loss: 0.0290 - accuracy: 0.9938
Epoch 8/10
14/14 - 2s - loss: 0.0173 - accuracy: 0.9960
Epoch 9/10
14/14 - 2s - loss: 0.0102 - accuracy: 0.9982
Epoch 10/10
14/14 - 2s - loss: 0.0084 - accuracy: 0.9987
7/7 - 0s - loss: 0.0346 - accuracy: 0.9904
Epoch 1/10
14/14 - 2s - loss: 1.8552 - accuracy: 0.5075
Epoch 2/10
14/14 - 2s - loss: 0.6009 - accuracy: 0.7980
Epoch 3/10
14/14 - 2s - loss: 0.3816 - accuracy: 0.8763
Epoch 4/10
14/14 - 2s - loss: 0.2012 - accuracy: 0.9353
Epoch 5/10
14/14 - 2s - loss: 0.0930 - accuracy: 0.9740
Epoch 6/10
14/14 - 2s - loss: 0.0489 - accuracy: 0.9865
Epoch 7/10
14/14 - 2s - loss: 0.0268 - accuracy: 0.9936
Epoch 8/10
14/14 - 2s - loss: 0.0156 - accuracy: 0.9971
Epoch 9/10
14/14 - 2s - loss: 0.0104 - accuracy: 0.9981
Epoch 10/10
14/14 - 2s - loss: 0.0068 - accuracy: 0.9989
7/7 - 0s - loss: 0.0302 - accuracy: 0.9923
Epoch 1/10
14/14 - 2s - loss: 2.1718 - accuracy: 0.4654
Epoch 2/10
14/14 - 2s - loss: 0.8183 - accuracy: 0.7235
Epoch 3/10
14/14 - 2s - loss: 0.4146 - accuracy: 0.8510
Epoch 4/10
14/14 - 2s - loss: 0.3472 - accuracy: 0.9009
Epoch 5/10
14/14 - 2s - loss: 0.1561 - accuracy: 0.9558
Epoch 6/10
14/14 - 2s - loss: 0.0796 - accuracy: 0.9794
Epoch 7/10
14/14 - 2s - loss: 0.0451 - accuracy: 0.9894
Epoch 8/10
14/14 - 2s - loss: 0.0333 - accuracy: 0.9921
Epoch 9/10
14/14 - 2s - loss: 0.0209 - accuracy: 0.9955
Epoch 10/10
14/14 - 2s - loss: 0.0165 - accuracy: 0.9965
7/7 - 0s - loss: 0.0314 - accuracy: 0.9901
Simple CNN Cross validation score : 0.9909
In [50]:
history = model.fit(x_train, y_train,validation_data=(x_test, y_test),epochs=epoch,batch_size=batch_size)
Epoch 1/15
41/41 - 71s - loss: 1.3179 - accuracy: 0.6493 - val_loss: 0.5494 - val_accuracy: 0.7907
Epoch 2/15
41/41 - 47s - loss: 0.3257 - accuracy: 0.8911 - val_loss: 0.1759 - val_accuracy: 0.9370
Epoch 3/15
41/41 - 47s - loss: 0.0729 - accuracy: 0.9803 - val_loss: 0.0612 - val_accuracy: 0.9819
Epoch 4/15
41/41 - 50s - loss: 0.0261 - accuracy: 0.9941 - val_loss: 0.0342 - val_accuracy: 0.9892
Epoch 5/15
41/41 - 50s - loss: 0.0155 - accuracy: 0.9968 - val_loss: 0.0294 - val_accuracy: 0.9916
Epoch 6/15
41/41 - 45s - loss: 0.0106 - accuracy: 0.9983 - val_loss: 0.0268 - val_accuracy: 0.9913
Epoch 7/15
41/41 - 45s - loss: 0.0070 - accuracy: 0.9987 - val_loss: 0.0267 - val_accuracy: 0.9917
Epoch 8/15
41/41 - 45s - loss: 0.0052 - accuracy: 0.9989 - val_loss: 0.0358 - val_accuracy: 0.9905
Epoch 9/15
41/41 - 45s - loss: 0.0054 - accuracy: 0.9988 - val_loss: 0.0332 - val_accuracy: 0.9914
Epoch 10/15
41/41 - 45s - loss: 0.0127 - accuracy: 0.9985 - val_loss: 0.0260 - val_accuracy: 0.9930
Epoch 11/15
41/41 - 45s - loss: 0.0049 - accuracy: 0.9990 - val_loss: 0.0268 - val_accuracy: 0.9901
Epoch 12/15
41/41 - 45s - loss: 0.0039 - accuracy: 0.9994 - val_loss: 0.0476 - val_accuracy: 0.9890
Epoch 13/15
41/41 - 45s - loss: 0.4950 - accuracy: 0.8910 - val_loss: 0.1404 - val_accuracy: 0.9473
Epoch 14/15
41/41 - 48s - loss: 0.0441 - accuracy: 0.9881 - val_loss: 0.0305 - val_accuracy: 0.9907
Epoch 15/15
41/41 - 49s - loss: 0.0112 - accuracy: 0.9976 - val_loss: 0.0219 - val_accuracy: 0.9938
In [51]:
score = model.score(x_test, y_test)
# KerasClassifier.score returns a single accuracy value rather than a [loss, accuracy] pair
print('Testing Accuracy:', score)
10/10 - 48s - loss: 0.0219 - accuracy: 0.9938
Testing Accuracy: 0.9937660694122314
In [55]:
def plot_model__hist(hist):
    # loss curves
    plt.figure(figsize=(6,6))
    plt.style.use("ggplot")
    plt.plot(hist.history['loss'], color='b', label="Training loss")
    plt.plot(hist.history['val_loss'], color='r', label="Validation loss")
    plt.legend()
    plt.show()
    # accuracy curves
    plt.figure(figsize=(6,6))
    plt.style.use("ggplot")
    plt.plot(hist.history['accuracy'], color='b', label="Training accuracy")
    plt.plot(hist.history['val_accuracy'], color='r', label="Validation accuracy")
    plt.legend(loc="lower right")
    plt.show()
In [57]:
plot_model__hist(history)
print("wafer defect 탐지 모델의 정확도: {:5.2f}%".format(100*round(score,2)))
Wafer defect detection model accuracy: 99.00%