7. Convolutional Neural Networks (CNN)

What This Chapter Covers

  • The convolutional layer
  • The pooling layer
  • Both layers can be implemented efficiently using im2col (a function that unrolls image data into a matrix).
  • Visualizing a CNN shows that higher-level information is extracted as the layers get deeper.
  • Representative CNNs include LeNet and AlexNet.

Overall Structure

[Figure: Convolution (1)]

  • A CNN adds two new kinds of layers: the convolutional layer and the pooling layer.

Convolutional Layer

Problems with Fully Connected Layers

  • The problem with a fully connected layer is that the shape of the data is ignored.
    • When the input is an image, it is usually 3-dimensional data: height × width × channels (colors).
    • This 3D shape carries valuable spatial information, and essential patterns may be hidden in it.
      • (ex.1) Spatially close pixels tend to have similar values.
      • (ex.2) The RGB channels may be closely related to one another.
      • (ex.3) Pixels that are far apart may have little to do with each other.
    • To feed data into a fully connected layer, the 3D data must be flattened into 1D data (a minimal sketch follows after this list).
  • A convolutional layer preserves the shape.
    • It receives the image as 3D data and passes 3D data on to the next layer.
    • A CNN therefore has a chance of properly understanding data with spatial structure, such as images.
  • In a CNN, the input and output data of a convolutional layer are also called feature maps.
    • The input data of a convolutional layer: the input feature map
    • The output data of a convolutional layer: the output feature map
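
A minimal NumPy sketch of the flattening problem described above; the array sizes are made up for illustration.

import numpy as np

# One RGB image as a 3D array: (channels, height, width)
image = np.random.rand(3, 28, 28)

# A fully connected layer only accepts flat vectors, so the spatial
# structure must be thrown away before the data goes in.
flat = image.reshape(-1)
print(flat.shape)    # (2352,)

# A convolutional layer receives the 3D array as-is and passes a 3D
# feature map on to the next layer, so the shape is preserved.
print(image.shape)   # (3, 28, 28)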

The Convolution Operation

  • The convolution operation in a convolutional layer corresponds to the filter operation in image processing.
  • A convolution applies a filter (also called a kernel) to the input data.
  • The filter window is slid across the input data at a fixed interval.
  • At each position, the corresponding elements of the input and the filter are multiplied and the products are summed.
    • This is called a fused multiply-add (FMA).
  • The result is stored at the corresponding position of the output (a naive single-channel sketch follows after this list).

  • A fully connected network has weight parameters and biases; in a CNN, the filter parameters play the role of the 'weights'.
  • A CNN also has biases.
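
Below is a naive sketch of the single-channel filter operation described above (stride 1, no padding); the toy arrays are made up, and the efficient implementation later in this chapter uses im2col instead.

import numpy as np

def conv2d_single(x, w, b=0.0):
    """Naive single-channel convolution (stride 1, no padding):
    slide the filter window over the input and, at each position,
    multiply corresponding elements and accumulate the sum (FMA)."""
    H, W = x.shape
    FH, FW = w.shape
    out = np.zeros((H - FH + 1, W - FW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+FH, j:j+FW] * w) + b
    return out

x = np.arange(16, dtype=float).reshape(4, 4)  # (4, 4) input
w = np.ones((3, 3))                           # (3, 3) filter
print(conv2d_single(x, w).shape)              # (2, 2)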

[Figure: Convolution (2)]

Padding

  • Before performing the convolution, the area around the input data is sometimes filled with a fixed value (e.g. 0); this is called padding.
  • The examples here use a padding of 1, but it can be set to any desired integer such as 2 or 3.
    • If a (4, 4) input is padded with 2, the input size becomes (8, 8).
  • Padding is used mainly to control the output size.
    • For example, applying a (3, 3) filter to a (4, 4) input yields a (2, 2) output, 2 smaller than the input.
    • If convolutions are applied repeatedly, at some point the output size shrinks to 1, which is a problem.
      • Padding lets the spatial size of the data be kept fixed as it is passed to the next layer.

Stride

  • The interval at which the filter is applied is called the stride.
  • With a stride of 2, the filter window moves two cells at a time.

  • Increasing the stride makes the output smaller.
  • Increasing the padding makes the output larger.
  • Taking these into account, the output size can be computed with the formula below (a small helper after the formula works through a few examples).
\[OH = \frac{H + 2P - FH}{S} + 1 \qquad OW = \frac{W + 2P - FW}{S} + 1\]
\[\text{input size} = (H, W),\quad \text{filter size} = (FH, FW),\quad \text{output size} = (OH, OW),\quad \text{padding} = P,\quad \text{stride} = S\]
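
The formula can be checked with a small helper; the sizes below are arbitrary examples.

def conv_output_size(H, W, FH, FW, pad=0, stride=1):
    """Output height/width of a convolution, following the formula above."""
    OH = (H + 2 * pad - FH) // stride + 1
    OW = (W + 2 * pad - FW) // stride + 1
    return OH, OW

print(conv_output_size(4, 4, 3, 3))                  # (2, 2): no padding shrinks the map
print(conv_output_size(4, 4, 3, 3, pad=1))           # (4, 4): padding of 1 keeps the size
print(conv_output_size(7, 7, 3, 3, pad=0, stride=2)) # (3, 3): stride 2 shrinks it faster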

Convolution on 3D Data

  • Images are 3D data, so a convolution that handles 3D data is needed.
  • The difference from the 2D case is that the feature maps extend in the depth (channel) direction.
  • The convolution is performed per channel between the input data and the filter, and the channel-wise results are summed into a single output (see the sketch after this list).
  • The important point in a 3D convolution is that the number of channels of the input and the number of channels of the filter must match.
    • The spatial size of the filter can be set to any desired value.
    • However, the filter must have the same size in every channel.
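
A minimal sketch of the channel-wise sum described above, for a single filter; the shapes are made up, and the efficient version later in this chapter uses im2col.

import numpy as np

def conv3d_single_filter(x, w, b=0.0):
    """x has shape (C, H, W), w has shape (C, FH, FW); the per-channel
    convolutions are summed into one 2D output map (stride 1, no padding)."""
    C, H, W = x.shape
    Cw, FH, FW = w.shape
    assert C == Cw, "input and filter must have the same number of channels"
    out = np.zeros((H - FH + 1, W - FW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[:, i:i+FH, j:j+FW] * w) + b
    return out

x = np.random.rand(3, 7, 7)   # 3-channel input
w = np.random.rand(3, 5, 5)   # filter with a matching channel count
print(conv3d_single_filter(x, w).shape)  # (3, 3)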

[Figure: Convolution (3)]

[Figure: Convolution (4)]

Batch Processing

  • Each layer stores the data as 4-dimensional arrays, adding one extra dimension.
    • (number of data, channels, height, width)
  • Each time a 4D array flows through the network, the convolution is carried out for all N pieces of data.
    • That is, N iterations' worth of processing is done in a single pass.

[Figure: Convolution (5)]


Pooling Layer

  • Pooling is an operation that shrinks the spatial (height × width) extent of the data.
  • In image recognition, max pooling is used most often, although alternatives such as average pooling exist.
    • Max pooling takes the maximum value within the target region.
    • Average pooling takes the average of the target region.
  • The pooling window size and the stride are usually set to the same value.

[Figure: Convolution (6)]

Characteristics of the Pooling Layer

  • It has no parameters to learn.
    • Pooling is a fixed operation that simply takes the maximum or the average of the target region.
  • The number of channels does not change.
    • Pooling passes the input's channel count through to the output unchanged.
  • It is robust to small changes in the input.
    • Even if the input data changes slightly, the pooling result usually does not (a short demonstration follows after this list).
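
A short demonstration of 2×2 max pooling with stride 2 on a made-up 4×4 feature map, including its robustness to a small change in the input.

import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a single 2D feature map."""
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

x = np.array([[1, 2, 0, 1],
              [3, 0, 4, 2],
              [1, 5, 0, 0],
              [0, 2, 3, 1]], dtype=float)
print(max_pool_2x2(x))
# [[3. 4.]
#  [5. 3.]]

# Nudging a non-maximal value leaves the pooled output unchanged.
x[0, 0] = 2
print(max_pool_2x2(x))   # same result as above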

Implementing the Convolution/Pooling Layers

4-Dimensional Arrays

# In a CNN, the data flowing between layers is 4-dimensional
# (10, 1, 28, 28) -> 10 pieces of data, each with 1 channel, height 28, width 28
import numpy as np

x = np.random.rand(10, 1, 28, 28)
x.shape # (10, 1, 28, 28)
# Access the first of the 10 pieces of data
x[0].shape # (1, 28, 28)
# Access the second of the 10 pieces of data
x[1].shape # (1, 28, 28)

Unrolling Data with im2col

  • im2col is a function that unrolls (expands) the input data into a form that is convenient for filtering (the weight computation).
    • Each region of the input to which the filter is applied (a 3D block) is laid out as a single row.
  • Each filter is likewise unrolled into a single column; the matrix product of the im2col output and the unrolled filters is computed, and finally the result is reshaped back into 4D output data.

[Figure: Convolution (7)]

Implementing the Convolution Layer

def im2col(input_data, filter_h, filter_w, stride=1, pad=0):
    """다수의 이미지를 입력받아 2차원 배열로 변환한다(평탄화).

    Parameters
    ----------
    input_data : 4차원 배열 형태의 입력 데이터(이미지 수, 채널 수, 높이, 너비)
    filter_h : 필터의 높이
    filter_w : 필터의 너비
    stride : 스트라이드
    pad : 패딩

    Returns
    -------
    col : 2차원 배열
    """
    N, C, H, W = input_data.shape
    out_h = (H + 2*pad - filter_h)//stride + 1
    out_w = (W + 2*pad - filter_w)//stride + 1

    img = np.pad(input_data, [(0,0), (0,0), (pad, pad), (pad, pad)], 'constant')
    col = np.zeros((N, C, filter_h, filter_w, out_h, out_w))

    for y in range(filter_h):
        y_max = y + stride*out_h
        for x in range(filter_w):
            x_max = x + stride*out_w
            col[:, :, y, x, :, :] = img[:, :, y:y_max:stride, x:x_max:stride]

    col = col.transpose(0, 4, 5, 1, 2, 3).reshape(N*out_h*out_w, -1)
    return col

x1 = np.random.rand(1, 3, 7, 7) # (number of data, channels, height, width)
col1 = im2col(x1, 5, 5, stride=1, pad=0)
print(col1.shape) # (9, 75): 75 = 3 channels * 5*5 filter elements
x2 = np.random.rand(10, 3, 7, 7)
col2 = im2col(x2, 5, 5, stride=1, pad=0)
print(col2.shape) # (90, 75)

class Convolution:
    def __init__(self, W, b, stride=1, pad=0):
        self.W = W
        self.b = b
        self.stride = stride
        self.pad = pad

    def forward(self, x):
        FN, C, FH, FW = self.W.shape # FN: number of filters, C: channels, FH: filter height, FW: filter width
        N, C, H, W = x.shape
        out_h = int(1 + (H + 2 * self.pad - FH) / self.stride)
        out_w = int(1 + (W + 2 * self.pad - FW) / self.stride)

        col = im2col(x, FH, FW, self.stride, self.pad)
        col_W = self.W.reshape(FN, -1).T # unroll the filters
        out = np.dot(col, col_W) + self.b

        out = out.reshape(N, out_h, out_w, -1).transpose(0, 3, 1, 2)
        return out
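
A quick shape check of the Convolution class above; the filter count and input sizes below are arbitrary.

import numpy as np

W = np.random.rand(10, 3, 5, 5)   # 10 filters of shape (3, 5, 5)
b = np.zeros(10)
conv = Convolution(W, b, stride=1, pad=0)

x = np.random.rand(2, 3, 28, 28)  # 2 images, 3 channels, 28x28
print(conv.forward(x).shape)      # (2, 10, 24, 24)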

Implementing the Pooling Layer

  • As with the convolution layer, the pooling layer implementation uses im2col to unroll the input data.
  • The difference from the convolution layer is that pooling is applied to each channel independently.

class Pooling:
    def __init__(self, pool_h, pool_w, stride=1, pad=0):
        self.pool_h = pool_h
        self.pool_w = pool_w
        self.stride = stride
        self.pad = pad

    def forward(self, x):
        N, C, H, W = x.shape
        out_h = int(1 + (H - self.pool_h) / self.stride)
        out_w = int(1 + (W - self.pool_w) / self.stride)

        # unroll the input data
        col = im2col(x, self.pool_h, self.pool_w, self.stride, self.pad)
        col = col.reshape(-1, self.pool_h*self.pool_w)

        # take the maximum of each pooling window
        out = np.max(col, axis=1)

        # reshape
        out = out.reshape(N, out_h, out_w, C).transpose(0, 3, 1, 2)

        return out
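
A quick shape check of the Pooling class; with a 2×2 window and stride 2 the spatial size halves while the channel count is unchanged.

import numpy as np

pool = Pooling(pool_h=2, pool_w=2, stride=2)
x = np.random.rand(2, 3, 28, 28)
print(pool.forward(x).shape)   # (2, 3, 14, 14)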

Implementing the CNN

  • Conv -> ReLU -> Pooling -> Affine -> ReLU -> Affine -> Softmax

Classes and functions used

class SimpleConvNet:
    def __init__(self, input_dim=(1, 28, 28), conv_param={'filter_num':30, 'filter_size':5, 'pad':0, 'stride':1},
                 hidden_size=100, output_size=10, weight_init_std=0.01):
        ### Hyperparameters of the convolutional layer ###
        filter_num = conv_param['filter_num']
        filter_size = conv_param['filter_size']
        filter_pad = conv_param['pad']
        filter_stride = conv_param['stride']

        ### Compute the output size of the convolutional layer ###
        input_size = input_dim[1]
        conv_output_size = (input_size - filter_size + 2*filter_pad) / \
                            filter_stride + 1
        pool_output_size = int(filter_num * (conv_output_size/2) * (conv_output_size/2))

        ### Initialize the parameters to be learned ###
        self.params = {}
        # Weights and bias of layer 1 (convolution)
        self.params['W1'] = weight_init_std * \
                            np.random.randn(filter_num, input_dim[0], filter_size, filter_size)
        self.params['b1'] = np.zeros(filter_num)
        # Weights and bias of layer 2 (fully connected)
        self.params['W2'] = weight_init_std * \
                            np.random.randn(pool_output_size, hidden_size)
        self.params['b2'] = np.zeros(hidden_size)
        # Weights and bias of layer 3 (fully connected)
        self.params['W3'] = weight_init_std * \
                            np.random.randn(hidden_size, output_size)
        self.params['b3'] = np.zeros(output_size)

        ### Layers that make up the CNN ###
        self.layers = OrderedDict()
        self.layers['Conv1'] = Convolution(self.params['W1'], self.params['b1'],
                                           conv_param['stride'], conv_param['pad'])
        self.layers['Relu1'] = Relu()
        self.layers['Pool1'] = Pooling(pool_h=2, pool_w=2, stride=2)
        self.layers['Affine1'] = Affine(self.params['W2'], self.params['b2'])
        self.layers['Relu2'] = Relu()
        self.layers['Affine2'] = Affine(self.params['W3'], self.params['b3'])
        self.last_layer = SoftmaxWithLoss()
# (predict, loss, and gradient below are methods of SimpleConvNet, shown separately.)
def predict(self, x):
    for layer in self.layers.values():
        x = layer.forward(x)
    return x

def loss(self, x, t):
    y = self.predict(x)
    return self.last_layer.forward(y, t)
### Compute the gradients with backpropagation ###
# (Requires each layer to also implement backward() and store dW/db;
#  only the forward passes are shown above.)
def gradient(self, x, t):
    # forward pass
    self.loss(x, t)

    # backward pass
    dout = 1
    dout = self.last_layer.backward(dout)

    layers = list(self.layers.values())
    layers.reverse()
    for layer in layers:
        dout = layer.backward(dout)

    # store the results
    grads = {}
    grads['W1'] = self.layers['Conv1'].dW
    grads['b1'] = self.layers['Conv1'].db
    grads['W2'] = self.layers['Affine1'].dW
    grads['b2'] = self.layers['Affine1'].db
    grads['W3'] = self.layers['Affine2'].dW
    grads['b3'] = self.layers['Affine2'].db

    return grads

Running the Training

import sys
sys.path.append('/Users/yaelinjo/GitHub/deep-learning-from-scratch/master')
sys.path.append('/Users/yaelinjo/GitHub/deep-learning-from-scratch/master/ch07')
import numpy as np
import matplotlib.pyplot as plt
from dataset.mnist import load_mnist
from simple_convnet import SimpleConvNet
from common.trainer import Trainer

# Load the data
(x_train, t_train), (x_test, t_test) = load_mnist(flatten=False)

# If training takes too long, reduce the amount of data.
x_train, t_train = x_train[:5000], t_train[:5000]
x_test, t_test = x_test[:1000], t_test[:1000]

max_epochs = 20

network = SimpleConvNet(input_dim=(1,28,28),
                        conv_param = {'filter_num': 30, 'filter_size': 5, 'pad': 0, 'stride': 1},
                        hidden_size=100, output_size=10, weight_init_std=0.01)

trainer = Trainer(network, x_train, t_train, x_test, t_test,
                  epochs=max_epochs, mini_batch_size=100,
                  optimizer='Adam', optimizer_param={'lr': 0.001},
                  evaluate_sample_num_per_epoch=1000)
trainer.train()

# Save the parameters
network.save_params("params.pkl")
print("Saved Network Parameters!")

# Plot the results
markers = {'train': 'o', 'test': 's'}
x = np.arange(max_epochs)
plt.plot(x, trainer.train_acc_list, marker='o', label='train', markevery=2)
plt.plot(x, trainer.test_acc_list, marker='s', label='test', markevery=2)
plt.xlabel("epochs")
plt.ylabel("accuracy")
plt.ylim(0, 1.0)
plt.legend(loc='lower right')
plt.show()
train loss:2.2991227148691604 ... (per-iteration loss output abridged; the loss falls steadily to mostly below 0.1 by epoch 20)
=== epoch:1, train acc:0.251, test acc:0.19 ===
=== epoch:2, train acc:0.811, test acc:0.803 ===
=== epoch:3, train acc:0.87, test acc:0.867 ===
=== epoch:4, train acc:0.896, test acc:0.875 ===
=== epoch:5, train acc:0.897, test acc:0.888 ===
=== epoch:6, train acc:0.9, test acc:0.906 ===
=== epoch:7, train acc:0.911, test acc:0.907 ===
=== epoch:8, train acc:0.93, test acc:0.9 ===
=== epoch:9, train acc:0.946, test acc:0.925 ===
=== epoch:10, train acc:0.952, test acc:0.925 ===
=== epoch:11, train acc:0.959, test acc:0.931 ===
=== epoch:12, train acc:0.964, test acc:0.926 ===
=== epoch:13, train acc:0.965, test acc:0.93 ===
=== epoch:14, train acc:0.973, test acc:0.939 ===
=== epoch:15, train acc:0.98, test acc:0.944 ===
=== epoch:16, train acc:0.977, test acc:0.941 ===
=== epoch:17, train acc:0.974, test acc:0.943 ===
=== epoch:18, train acc:0.981, test acc:0.949 ===
=== epoch:19, train acc:0.975, test acc:0.939 ===
=== epoch:20, train acc:0.988, test acc:0.942 ===
=============== Final Test Accuracy ===============
test acc:0.942
Saved Network Parameters!

[Figure: Result 1 (train/test accuracy per epoch)]


Visualizing the CNN

import numpy as np
import matplotlib.pyplot as plt
from simple_convnet import SimpleConvNet

def filter_show(filters, nx=8, margin=3, scale=10):
    """
    c.f. https://gist.github.com/aidiary/07d530d5e08011832b12#file-draw_weight-py
    """
    FN, C, FH, FW = filters.shape
    ny = int(np.ceil(FN / nx))

    fig = plt.figure()
    fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)

    for i in range(FN):
        ax = fig.add_subplot(ny, nx, i+1, xticks=[], yticks=[])
        ax.imshow(filters[i, 0], cmap=plt.cm.gray_r, interpolation='nearest')
    plt.show()


network = SimpleConvNet()
# Weights right after random initialization
filter_show(network.params['W1'])

# Weights after training
network.load_params("params.pkl")
filter_show(network.params['W1'])

[Figure: Result 2 (layer-1 filters before training)]

[Figure: Result 3 (layer-1 filters after training)]


Representative CNNs

  • LeNet (1998), the original CNN
    • Repeats convolutional layers and pooling layers (more precisely, subsampling layers that simply thin out the elements)
    • Finally passes through fully connected layers to produce the output
    • Uses the sigmoid function as its activation function (ReLU is standard today)
    • Uses subsampling to shrink the intermediate data (max pooling is standard today)
  • AlexNet (2012), which set off the deep learning boom
    • Its overall structure is not very different from LeNet.
    • Uses ReLU as the activation function
    • Uses LRN (local response normalization), a layer that performs local normalization
    • Uses dropout