How to compute mean squared error (squared L2 norm) in PyTorch?
Mean squared error is the average of the squared differences between the input and the target (the predicted and actual values). To compute it in PyTorch, we can apply the **MSELoss()** function provided by the **torch.nn** module. It creates a criterion that measures the mean squared error, which is also known as the **squared L2 norm**.
Both the actual and predicted values are tensors with the same number of elements, and they may have any number of dimensions. The function returns a tensor holding a scalar value. It is one of the loss functions provided by the **torch.nn** module. Loss functions are used to optimize a deep neural network by minimizing the loss.
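As a quick sanity check of the definition, the mean squared error can be computed by hand as the average of the element-wise squared differences. A minimal pure-Python sketch, using the same sample values as the 1D example below:

```python
# Mean squared error by hand: average of element-wise squared differences
predicted = [0.10, 0.20, 0.40, 0.50]
actual = [0.09, 0.2, 0.38, 0.52]

squared_diffs = [(p - a) ** 2 for p, a in zip(predicted, actual)]
mse = sum(squared_diffs) / len(squared_diffs)

print(round(mse, 6))  # 0.000225 -- PyTorch prints this as tensor(0.0002)
```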
Syntax

```
torch.nn.MSELoss()
```
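**MSELoss()** also accepts a `reduction` argument ('mean' by default, or 'sum' / 'none') that controls how the per-element squared differences are aggregated. A short sketch of how the three modes differ:

```python
import torch
import torch.nn as nn

input = torch.tensor([0.10, 0.20, 0.40, 0.50])
target = torch.tensor([0.09, 0.2, 0.38, 0.52])

mean_loss = nn.MSELoss()(input, target)                   # average over all elements (default)
sum_loss = nn.MSELoss(reduction='sum')(input, target)     # sum of squared differences
none_loss = nn.MSELoss(reduction='none')(input, target)   # one squared difference per element

print(mean_loss)  # tensor(0.0002)
print(sum_loss)
print(none_loss)
```

With `reduction='none'` the result has the same shape as the inputs, which is useful for inspecting or re-weighting individual errors.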
Steps
To measure the mean squared error, follow the steps below −
Import the required libraries. In all of the following examples, the required Python library is **torch**. Make sure you have it installed. The examples also use the **torch.nn** module, so import it as well.

```
import torch
import torch.nn as nn
```
Create the input and target tensors and print them.

```
input = torch.tensor([0.10, 0.20, 0.40, 0.50])
target = torch.tensor([0.09, 0.2, 0.38, 0.52])
```
Create a criterion to measure the mean squared error.

```
mse = nn.MSELoss()
```
Compute the mean squared error (loss) and print it.

```
output = mse(input, target)
print("MSE loss:", output)
```
Example 1
In this program, we measure the mean squared error between the input and target tensors. Both the input and target tensors are 1D tensors.
```
# Import the required libraries
import torch
import torch.nn as nn

# define the input and target tensors
input = torch.tensor([0.10, 0.20, 0.40, 0.50])
target = torch.tensor([0.09, 0.2, 0.38, 0.52])

# print input and target tensors
print("Input Tensor:\n", input)
print("Target Tensor:\n", target)

# create a criterion to measure the mean squared error
mse = nn.MSELoss()

# compute the loss (mean squared error)
output = mse(input, target)
# output.backward()
print("MSE loss:", output)
```
Output

```
Input Tensor:
 tensor([0.1000, 0.2000, 0.4000, 0.5000])
Target Tensor:
 tensor([0.0900, 0.2000, 0.3800, 0.5200])
MSE loss: tensor(0.0002)
```
Notice that the mean squared error is a scalar value.
Example 2
In this program, we measure the mean squared error between the input and target tensors. Both the input and target tensors are 2D tensors.
```
# Import the required libraries
import torch
import torch.nn as nn

# define the input and target tensors
input = torch.randn(3, 4)
target = torch.randn(3, 4)

# print input and target tensors
print("Input Tensor:\n", input)
print("Target Tensor:\n", target)

# create a criterion to measure the mean squared error
mse = nn.MSELoss()

# compute the loss (mean squared error)
output = mse(input, target)
# output.backward()
print("MSE loss:", output)
```
Output

```
Input Tensor:
 tensor([[-1.6413,  0.8950, -1.0392,  0.2382],
        [-0.3868,  0.2483,  0.9811, -0.9260],
        [-0.0263, -0.0911, -0.6234,  0.6360]])
Target Tensor:
 tensor([[-1.6068,  0.7233, -0.0925, -0.3140],
        [-0.4978,  1.3121, -1.4910, -1.4643],
        [-2.2589,  0.3073,  0.2038, -1.5656]])
MSE loss: tensor(1.6209)
```
Example 3
In this program, we measure the mean squared error between the input and target tensors. Both the input and target tensors are 2D tensors. The input tensor is created with **requires_grad=True**, so we also compute the gradient of the loss with respect to the input tensor.
```
# Import the required libraries
import torch
import torch.nn as nn

# define the input and target tensors
input = torch.randn(4, 5, requires_grad = True)
target = torch.randn(4, 5)

# print input and target tensors
print("Input Tensor:\n", input)
print("Target Tensor:\n", target)

# create a criterion to measure the mean squared error
loss = nn.MSELoss()

# compute the loss (mean squared error)
output = loss(input, target)
output.backward()
print("MSE loss:", output)
print("input.grad:\n", input.grad)
```
Output

```
Input Tensor:
 tensor([[ 0.1813,  0.4199,  1.1768, -0.7068,  0.2960],
        [ 0.7950,  0.0945, -0.0954, -1.0170, -0.1471],
        [ 1.2264,  1.7573,  0.9099,  1.3720, -0.9087],
        [-1.0122, -0.8649, -0.7797, -0.7787,  0.9944]], requires_grad=True)
Target Tensor:
 tensor([[-0.6370, -0.8421,  1.2474,  0.4363, -0.1481],
        [-0.1500, -1.3141,  0.7349,  0.1184, -2.7065],
        [-1.0776,  1.3530,  0.6939, -1.3191,  0.7406],
        [ 0.2058,  0.4765,  0.0695,  1.2146,  1.1519]])
MSE loss: tensor(1.9330, grad_fn=<MseLossBackward>)
input.grad:
 tensor([[ 0.0818,  0.1262, -0.0071, -0.1143,  0.0444],
        [ 0.0945,  0.1409, -0.0830, -0.1135,  0.2559],
        [ 0.2304,  0.0404,  0.0216,  0.2691, -0.1649],
        [-0.1218, -0.1341, -0.0849, -0.1993, -0.0158]])
```
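The gradient values above follow from the loss definition: for a loss L = mean((x − y)²) over N elements, the gradient with respect to x is 2(x − y)/N. A small sketch (with fresh random tensors, so the numbers differ from the run above) checking `input.grad` against this closed form:

```python
import torch
import torch.nn as nn

input = torch.randn(4, 5, requires_grad=True)
target = torch.randn(4, 5)

# compute the loss and backpropagate to populate input.grad
output = nn.MSELoss()(input, target)
output.backward()

# closed-form gradient of the mean of squared differences: 2 * (x - y) / N
expected = 2 * (input.detach() - target) / input.numel()
print(torch.allclose(input.grad, expected))  # True
```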