[Question] Validation accuracy doesn't change at all
I'm practicing image classification with VGG16, adapting the code from this post: https://reurl.cc/RvvG9x
The VGG16 part differs from that post: instead of using the torchvision default, the model is written by hand (see the sketch below for the kind of layout I mean).
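For reference, a hand-written VGG16 of this kind typically looks like the following minimal sketch (the standard configuration-D layout, assuming 224x224 RGB inputs and two classes, inferred from the 0.5 accuracy below; the exact code may differ):

    import torch.nn as nn

    # Minimal sketch of a hand-written VGG16 (configuration D).
    # Assumptions: 224x224 RGB inputs, num_classes=2.
    CFG = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
           512, 512, 512, 'M', 512, 512, 512, 'M']

    class VGG16(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            layers, in_ch = [], 3
            for v in CFG:
                if v == 'M':
                    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
                else:
                    layers += [nn.Conv2d(in_ch, v, kernel_size=3, padding=1),
                               nn.ReLU(inplace=True)]
                    in_ch = v
            self.features = nn.Sequential(*layers)    # conv backbone
            self.classifier = nn.Sequential(          # fully connected head
                nn.Flatten(),
                nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(),
                nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(),
                nn.Linear(4096, num_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))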
The problem: during training, the Testing acc stays at exactly 0.500 every epoch, while the training acc just bounces around 0.5.
In other words, the model doesn't seem to be learning anything at all.
The training code is below
(essentially the same as in that post; I only renamed the original C to model and cast the labels to LongTensor):
if __name__ == '__main__':
    for epoch in range(epochs):
        start_time = time.time()
        iter = 0
        correct_train, total_train = 0, 0
        correct_test, total_test = 0, 0
        train_loss_C = 0.0
        model.train()  # switch to training mode
        print('epoch: ' + str(epoch + 1) + ' / ' + str(epochs))

        # ---------------------------
        # Training Stage
        # ---------------------------
        for i, (x, label) in enumerate(train_dataloader):
            label = label.type(torch.LongTensor)
            x, label = x.to(device), label.to(device)
            optimizer_C.zero_grad()                       # clear gradients
            train_output = model(x)
            train_loss = criterion(train_output, label)   # compute loss
            train_loss.backward()                         # backpropagate
            optimizer_C.step()                            # update weights

            # accumulate training accuracy (correct_train / total_train)
            _, predicted = torch.max(train_output.data, 1)
            total_train += label.size(0)
            correct_train += (predicted == label).sum()
            train_loss_C += train_loss.item()
            iter += 1

        print('Training epoch: %d / loss_C: %.3f | acc: %.3f' %
              (epoch + 1, train_loss_C / iter, correct_train / total_train))

        # --------------------------
        # Testing Stage
        # --------------------------
        model.eval()  # switch to evaluation mode
        for i, (x, label) in enumerate(test_dataloader):
            with torch.no_grad():
                label = label.type(torch.LongTensor)
                x, label = x.to(device), label.to(device)
                test_output = model(x)
                test_loss = criterion(test_output, label)  # compute loss

                # accumulate test accuracy (correct_test / total_test)
                _, predicted = torch.max(test_output.data, 1)
                total_test += label.size(0)
                correct_test += (predicted == label).sum()

        print('Testing acc: %.3f' % (correct_test / total_test))

        train_acc.append(100 * (correct_train / total_train).cpu())  # training accuracy
        test_acc.append(100 * (correct_test / total_test).cpu())     # testing accuracy
        loss_epoch_C.append(train_loss_C / iter)                     # loss

        end_time = time.time()
        print('Cost %.3f(secs)' % (end_time - start_time))
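Since the loss barely moves from the first epoch, one standard sanity check (not part of the code above) is to see whether the model can overfit a single batch; if it can't drive the loss toward 0 on a handful of samples, the issue is likely in the model or optimizer settings (e.g., a learning rate too high for training VGG from scratch) rather than in the loop itself. A sketch, reusing model, criterion, optimizer_C, train_dataloader, and device from above:

    # Sanity check: a healthy model/optimizer pair should memorize one batch.
    for x, label in train_dataloader:   # grab a single batch
        break
    x, label = x.to(device), label.type(torch.LongTensor).to(device)
    model.train()
    for step in range(200):
        optimizer_C.zero_grad()
        loss = criterion(model(x), label)
        loss.backward()
        optimizer_C.step()
        if step % 50 == 0:
            print('step %d: loss %.4f' % (step, loss.item()))
    # If the loss stays pinned near 0.693 even here, suspect the architecture,
    # weight initialization, or learning rate rather than the data pipeline.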
Running the full loop with epochs set to 3 gives the following results:
epoch: 1 / 3
Training epoch: 1 / loss_C: 0.693 | acc: 0.514
Testing acc: 0.500
Cost 21.822(secs)
epoch: 2 / 3
Training epoch: 2 / loss_C: 0.693 | acc: 0.475
Testing acc: 0.500
Cost 23.002(secs)
epoch: 3 / 3
Training epoch: 3 / loss_C: 0.693 | acc: 0.476
Testing acc: 0.500
Cost 22.669(secs)
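One thing worth noting: loss_C is pinned at 0.693 in every epoch, which is exactly ln 2, the cross-entropy a two-class classifier incurs by always predicting a 50/50 split. In other words, the outputs look effectively constant. A quick check of that number:

    import math
    import torch
    import torch.nn.functional as F

    # Cross-entropy of a completely uninformative 2-class prediction:
    logits = torch.zeros(1, 2)                    # equal score for both classes
    loss = F.cross_entropy(logits, torch.tensor([0]))
    print(loss.item(), math.log(2))               # both print 0.6931...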
--
※ Sent from PTT (ptt.cc), from: 42.77.144.160 (Taiwan)
※ Article URL: https://www.ptt.cc/bbs/DataScience/M.1678336929.A.EBC.html