[PyTorch] Evaluating and Predicting

This post continues from the previous one.

Evaluation

Just as we switched the model into training mode with model.train(), we call model.eval() when evaluating.

# Test mode
# switches layers such as batch norm and dropout to evaluation behavior
model.eval()

# Out
Net(
  (conv1): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(20, 50, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=800, out_features=500, bias=True)
  (fc2): Linear(in_features=500, out_features=10, bias=True)
)
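As a quick sanity check (a minimal sketch of my own, not from the original post): in train mode a Dropout layer randomly zeroes activations, while in eval mode it becomes a no-op.

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 10)

drop.train()
print(drop(x))   # roughly half the entries zeroed; survivors scaled by 1/(1-p) = 2

drop.eval()
print(drop(x))   # identity: all ones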

torch.no_grad() disables the autograd engine, i.e. backpropagation and gradient bookkeeping, which reduces memory usage and speeds up computation.
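A minimal sketch of the effect (my own toy example): inside the context, operations on tensors are not tracked for gradients.

import torch

w = torch.randn(3, requires_grad=True)

y = (w * 2).sum()
print(y.requires_grad)    # True: autograd tracks the operation

with torch.no_grad():
    z = (w * 2).sum()
    print(z.requires_grad)    # False: no graph is built, saving memory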

test_loss = 0
correct = 0

with torch.no_grad():
    # evaluate on a single batch from the test set
    data, target = next(iter(test_loader))
    data, target = data.to(device), target.to(device)
    output = model(data)

    # sum up the per-sample loss over the batch
    test_loss += F.nll_loss(output, target, reduction='sum').item()

    # the index of the max log-probability is the predicted class
    pred = output.argmax(dim=1, keepdim=True)
    correct = pred.eq(target.view_as(pred)).sum().item()

# Out
test_loss : 29.74889373779297
correct : 54
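To see what the argmax/eq bookkeeping does, here is a tiny standalone example (hypothetical numbers):

import torch

output = torch.tensor([[0.1, 2.0, 0.3],
                       [1.5, 0.2, 0.1]])    # logits for 2 samples, 3 classes
target = torch.tensor([1, 2])

pred = output.argmax(dim=1, keepdim=True)    # shape (2, 1): [[1], [0]]
correct = pred.eq(target.view_as(pred)).sum().item()
print(pred.squeeze(), correct)    # tensor([1, 0]), 1 correct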

Since the losses were summed per sample with reduction='sum', dividing by the size of the whole dataset gives the average loss per sample:

test_loss /= len(test_loader.dataset)
=> 0.0029748893737792967
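The relationship between reduction='sum' and the per-sample average can be checked directly (a sketch with made-up logits):

import torch
import torch.nn.functional as F

output = torch.log_softmax(torch.randn(4, 10), dim=1)    # 4 samples, 10 classes
target = torch.tensor([1, 0, 4, 9])

total = F.nll_loss(output, target, reduction='sum')     # sum over the batch
mean = F.nll_loss(output, target, reduction='mean')     # average per sample
print(torch.isclose(total / 4, mean))                   # tensor(True)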

Summary

model.eval()

test_loss = 0
correct = 0

with torch.no_grad():
    for data, target in test_loader:
        data, target = data.to(device), target.to(device)
        output = model(data)
        test_loss += F.nll_loss(output, target, reduction='sum').item()
        pred = output.argmax(dim=1, keepdim=True)
        correct += pred.eq(target.view_as(pred)).sum().item()

test_loss /= len(test_loader.dataset)

print('\nTest set: Average Loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
    test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset)))

# Out
Test set: Average Loss: 0.4799, Accuracy: 8660/10000 (87%)
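For reuse across epochs, the same loop can be wrapped in a helper. This is just a sketch of the pattern above, assuming model, device, and test_loader are defined as before:

import torch
import torch.nn.functional as F

def evaluate(model, device, test_loader):
    # run one full pass over the test set; return average loss and accuracy (%)
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    accuracy = 100. * correct / len(test_loader.dataset)
    return test_loss, accuracy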