Gyeonggi AI Development Course / Python

[Python] TensorFlow_Pooling

agingcurve 2022. 7. 13. 10:05

TensorFlow Pooling

  • The pooling layer's purpose is to shrink the feature map that will be fed into Flatten, so that the Dense layers that follow need far fewer weights
  • How max pooling works: given a (6, 6) feature map, 2x2 max pooling halves each dimension, shrinking it to (3, 3)
  • The goal of max pooling is to reduce the size while preserving the most significant information (the maximum in each window)
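The 2x2 max-pooling idea in the bullets can be sketched without Keras, using a plain NumPy array (a toy (6, 6) input, not real image data):

```python
import numpy as np

# A toy (6, 6) "image": 2x2 max pooling keeps the largest value in each
# non-overlapping 2x2 window, halving each dimension to (3, 3).
img = np.arange(36).reshape(6, 6)

# Reshape into (row_block, row_in_block, col_block, col_in_block),
# then take the max over each 2x2 window.
pooled = img.reshape(3, 2, 3, 2).max(axis=(1, 3))
print(pooled.shape)  # (3, 3)
```

This is exactly what `tf.keras.layers.MaxPool2D()` does per channel with its default `pool_size=(2, 2)`.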
In [1]:
import tensorflow as tf
import pandas as pd
In [5]:
# Prepare the data
(독립, 종속), _ = tf.keras.datasets.mnist.load_data()
독립 = 독립.reshape(60000, 28, 28, 1)
종속 = pd.get_dummies(종속)
print(독립.shape, 종속.shape)
(60000, 28, 28, 1) (60000, 10)
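The second shape, (60000, 10), comes from `pd.get_dummies`, which adds one column per distinct label value. A minimal sketch with four toy labels (hypothetical values, not the MNIST data itself):

```python
import pandas as pd

# Toy labels standing in for MNIST digits; get_dummies produces one
# column per distinct value, so the real labels (digits 0-9) expand
# into 10 one-hot columns.
labels = pd.Series([0, 3, 3, 9])
onehot = pd.get_dummies(labels)
print(onehot.shape)  # (4, 3) -- only three distinct values here
```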
In [8]:
# Build the model
X = tf.keras.layers.Input(shape=[28, 28, 1])
H = tf.keras.layers.Conv2D(3, kernel_size=5, activation="swish")(X)
H = tf.keras.layers.MaxPool2D()(H)

H = tf.keras.layers.Conv2D(6, kernel_size=5, activation="swish")(H)
H = tf.keras.layers.MaxPool2D()(H)

H = tf.keras.layers.Flatten()(H)
H = tf.keras.layers.Dense(84, activation="swish")(H)
Y = tf.keras.layers.Dense(10, activation="softmax")(H)
model = tf.keras.models.Model(X, Y)
model.compile(loss="categorical_crossentropy", metrics=["accuracy"])
In [9]:
# Inspect the model
model.summary()
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_3 (InputLayer)        [(None, 28, 28, 1)]       0         
                                                                 
 conv2d_2 (Conv2D)           (None, 24, 24, 3)         78        
                                                                 
 max_pooling2d_2 (MaxPooling  (None, 12, 12, 3)        0         
 2D)                                                             
                                                                 
 conv2d_3 (Conv2D)           (None, 8, 8, 6)           456       
                                                                 
 max_pooling2d_3 (MaxPooling  (None, 4, 4, 6)          0         
 2D)                                                             
                                                                 
 flatten_1 (Flatten)         (None, 96)                0         
                                                                 
 dense_1 (Dense)             (None, 84)                8148      
                                                                 
 dense_2 (Dense)             (None, 10)                850       
                                                                 
=================================================================
Total params: 9,532
Trainable params: 9,532
Non-trainable params: 0
_________________________________________________________________
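The parameter counts in the summary can be reproduced by hand. Each Conv2D holds (kernel_h × kernel_w × in_channels + 1 bias) × filters weights, each Dense holds (inputs + 1 bias) × units, and pooling layers hold none:

```python
# Conv2D: (kernel_h * kernel_w * in_channels + 1 bias) * filters
conv1 = (5 * 5 * 1 + 1) * 3    # 78
conv2 = (5 * 5 * 3 + 1) * 6    # 456

# Flatten output after the second pooling: 4 * 4 * 6 = 96 features.
# Dense: (inputs + 1 bias) * units
dense1 = (4 * 4 * 6 + 1) * 84  # 8148
dense2 = (84 + 1) * 10         # 850

print(conv1 + conv2 + dense1 + dense2)  # 9532, matching Total params
```

Note how pooling keeps the total small: without the two MaxPool2D layers, Flatten would see a 20x20x6 map and the first Dense alone would need (2400 + 1) × 84 = 201,684 weights.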
In [10]:
# Train the model
model.fit(독립, 종속, epochs=10)
Epoch 1/10
1875/1875 [==============================] - 29s 15ms/step - loss: 0.9124 - accuracy: 0.8743
Epoch 2/10
1875/1875 [==============================] - 28s 15ms/step - loss: 0.1346 - accuracy: 0.9598
Epoch 3/10
1875/1875 [==============================] - 27s 15ms/step - loss: 0.1038 - accuracy: 0.9697
Epoch 4/10
1875/1875 [==============================] - 28s 15ms/step - loss: 0.0867 - accuracy: 0.9743
Epoch 5/10
1875/1875 [==============================] - 27s 14ms/step - loss: 0.0744 - accuracy: 0.9778
Epoch 6/10
1875/1875 [==============================] - 27s 15ms/step - loss: 0.0701 - accuracy: 0.9794
Epoch 7/10
1875/1875 [==============================] - 28s 15ms/step - loss: 0.0666 - accuracy: 0.9801
Epoch 8/10
1875/1875 [==============================] - 27s 14ms/step - loss: 0.0640 - accuracy: 0.9808
Epoch 9/10
1875/1875 [==============================] - 27s 15ms/step - loss: 0.0616 - accuracy: 0.9822
Epoch 10/10
1875/1875 [==============================] - 28s 15ms/step - loss: 0.0621 - accuracy: 0.9823
Out[10]:
<keras.callbacks.History at 0x7f9909bb7dd0>