How to upsample given multichannel temporal, spatial, or volumetric data in PyTorch?
Temporal data can be represented as a 1D tensor, spatial data as a 2D tensor, and volumetric data as a 3D tensor. The **Upsample** class provided by the torch.nn module supports **upsampling** these types of data. However, the data must be in the form **N ☓ C ☓ D (optional) ☓ H (optional) ☓ W**, where **N** is the minibatch size, **C** is the number of channels, and **D, H**, and **W** are the depth, height, and width of the data, respectively. Therefore, to upsample temporal data (1D), it must be viewed as a 3D tensor of the form **N ☓ C ☓ W**; spatial data (2D) must be viewed as a 4D tensor of the form **N ☓ C ☓ H ☓ W**; and volumetric data (3D) must be viewed as a 5D tensor of the form **N ☓ C ☓ D ☓ H ☓ W**.
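For instance, here is a minimal sketch of viewing raw 1D, 2D, and 3D data in these batched forms (the tensor sizes and variable names are only illustrative):
import torch

# A 1D temporal signal of length 8 viewed as N x C x W (batch of 1, 1 channel)
signal = torch.arange(8.)                  # shape: (8,)
temporal = signal.view(1, 1, 8)            # shape: (1, 1, 8)

# A 2D spatial map of size 4 x 4 viewed as N x C x H x W
image = torch.rand(4, 4)
spatial = image.view(1, 1, 4, 4)           # shape: (1, 1, 4, 4)

# A 3D volume of size 2 x 4 x 4 viewed as N x C x D x H x W
volume = torch.rand(2, 4, 4)
volumetric = volume.view(1, 1, 2, 4, 4)    # shape: (1, 1, 2, 4, 4)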
It supports different scale factors and modes. On a **3D (temporal)** tensor, we can apply **mode='linear'** and **'nearest'**. On a **4D (spatial)** tensor, we can apply **mode='nearest'**, **'bilinear'**, and **'bicubic'**. On a **5D (volumetric)** tensor, we can apply **mode='nearest'** and **'trilinear'**.
Syntax
torch.nn.Upsample()
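For reference, the constructor accepts roughly the following parameters (as documented for recent PyTorch releases; exact defaults may vary by version):
torch.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None)
Either **size** or **scale_factor** is given, and **align_corners** only applies to the linear family of modes ('linear', 'bilinear', 'bicubic', 'trilinear').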
Steps
You can use the following steps to upsample temporal, spatial, or volumetric data.
Import the required library. In all the following examples, the required Python library is **torch**. Make sure you have already installed it.
import torch
Define a temporal (3D), spatial (4D), or volumetric (5D) tensor and print it.
input = torch.tensor([[1., 2.],[3., 4.]]).view(1,2,2)
print(input.size())
print("Input Tensor:\n", input)
Create an instance of **Upsample** with **scale_factor** and **mode** to upsample the given multichannel data.
upsample = torch.nn.Upsample(scale_factor=3, mode='nearest')
Upsample the temporal, spatial, or volumetric tensor defined above using the created instance.
output = upsample(input)
Print the upsampled tensor.
print("Upsample by a scale_factor=3 with mode='nearest':\n", output)
Example 1
In this program, we upsample **temporal** data with different values of **scale_factor** and **mode**.
# Python program to upsample a 3D (Temporal) tensor
# On a 3D (Temporal) tensor we can apply mode='linear' and 'nearest'
import torch
# define a tensor and view as a 3D tensor
input = torch.tensor([[1., 2.],[3., 4.]]).view(1,2,2)
print(input.size())
print("Input Tensor:\n", input)
# create an instance of Upsample with scale_factor and mode
upsample1 = torch.nn.Upsample(scale_factor=2)
output1 = upsample1(input)
print("Upsample by a scale_factor=2\n", output1)
# define upsample with scale_factor and mode
upsample2 = torch.nn.Upsample(scale_factor=3)
output2 = upsample2(input)
print("Upsample by a scale_factor=3 with default mode:\n", output2)
upsample2 = torch.nn.Upsample(scale_factor=3, mode='nearest')
output2 = upsample2(input)
print("Upsample by a scale_factor=3 mode='nearest':\n", output2)
upsample_linear = torch.nn.Upsample(scale_factor=3, mode='linear')
output_linear = upsample_linear(input)
print("Upsample by a scale_factor=3, mode='linear':\n", output_linear)
Output
torch.Size([1, 2, 2])
Input Tensor:
 tensor([[[1., 2.],
         [3., 4.]]])
Upsample by a scale_factor=2
 tensor([[[1., 1., 2., 2.],
         [3., 3., 4., 4.]]])
Upsample by a scale_factor=3 with default mode:
 tensor([[[1., 1., 1., 2., 2., 2.],
         [3., 3., 3., 4., 4., 4.]]])
Upsample by a scale_factor=3 mode='nearest':
 tensor([[[1., 1., 1., 2., 2., 2.],
         [3., 3., 3., 4., 4., 4.]]])
Upsample by a scale_factor=3, mode='linear':
 tensor([[[1.0000, 1.0000, 1.3333, 1.6667, 2.0000, 2.0000],
         [3.0000, 3.0000, 3.3333, 3.6667, 4.0000, 4.0000]]])
Notice the difference between the output tensors obtained with different **scale_factor** and **mode** values.
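The same results can also be obtained with the functional API; a minimal equivalent sketch, assuming the same 3D input tensor as above:
import torch.nn.functional as F

# Equivalent to torch.nn.Upsample(scale_factor=3, mode='linear')
output_linear = F.interpolate(input, scale_factor=3, mode='linear')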
Example 2
In the following Python program, we upsample a **4D (spatial)** tensor with different values of **scale_factor** and **mode**.
# Python program to upsample a 4D (Spatial) tensor
# On a 4D (Spatial) tensor we can apply mode='nearest', 'bilinear' and 'bicubic'
import torch
# define a tensor and view as a 4D tensor
input = torch.tensor([[1., 2.],[3., 4.]]).view(1,1,2,2)
print(input.size())
print("Input Tensor:\n", input)
# upsample using mode='nearest'
upsample_nearest = torch.nn.Upsample(scale_factor=3, mode='nearest')
output_nearest = upsample_nearest(input)
# upsample using mode='bilinear'
upsample_bilinear = torch.nn.Upsample(scale_factor=3, mode='bilinear')
output_bilinear = upsample_bilinear(input)
# upsample using mode='bicubic'
upsample_bicubic = torch.nn.Upsample(scale_factor=3, mode='bicubic')
output_bicubic = upsample_bicubic(input)
# display the outputs
print("Upsample by a scale_factor=3, mode='nearest':\n", output_nearest)
print("Upsample by a scale_factor=3, mode='bilinear':\n", output_bilinear)
print("Upsample by a scale_factor=3, mode='bicubic':\n", output_bicubic)
Output
torch.Size([1, 1, 2, 2])
Input Tensor:
 tensor([[[[1., 2.],
          [3., 4.]]]])
Upsample by a scale_factor=3, mode='nearest':
 tensor([[[[1., 1., 1., 2., 2., 2.],
          [1., 1., 1., 2., 2., 2.],
          [1., 1., 1., 2., 2., 2.],
          [3., 3., 3., 4., 4., 4.],
          [3., 3., 3., 4., 4., 4.],
          [3., 3., 3., 4., 4., 4.]]]])
Upsample by a scale_factor=3, mode='bilinear':
 tensor([[[[1.0000, 1.0000, 1.3333, 1.6667, 2.0000, 2.0000],
          [1.0000, 1.0000, 1.3333, 1.6667, 2.0000, 2.0000],
          [1.6667, 1.6667, 2.0000, 2.3333, 2.6667, 2.6667],
          [2.3333, 2.3333, 2.6667, 3.0000, 3.3333, 3.3333],
          [3.0000, 3.0000, 3.3333, 3.6667, 4.0000, 4.0000],
          [3.0000, 3.0000, 3.3333, 3.6667, 4.0000, 4.0000]]]])
Upsample by a scale_factor=3, mode='bicubic':
 tensor([[[[0.6667, 0.7778, 1.0926, 1.4630, 1.7778, 1.8889],
          [0.8889, 1.0000, 1.3148, 1.6852, 2.0000, 2.1111],
          [1.5185, 1.6296, 1.9444, 2.3148, 2.6296, 2.7407],
          [2.2593, 2.3704, 2.6852, 3.0556, 3.3704, 3.4815],
          [2.8889, 3.0000, 3.3148, 3.6852, 4.0000, 4.1111],
          [3.1111, 3.2222, 3.5370, 3.9074, 4.2222, 4.3333]]]])
Notice the difference between the output tensors obtained with different **scale_factor** and **mode** values.
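For the 'bilinear' and 'bicubic' modes, the optional **align_corners** argument also changes how the border values are interpolated; a brief sketch, reusing the 4D input from above:
# align_corners=True aligns the corner pixels of input and output,
# giving different border values than the default (align_corners=False)
upsample_aligned = torch.nn.Upsample(scale_factor=3, mode='bilinear', align_corners=True)
output_aligned = upsample_aligned(input)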
Example 3
In this program, we upsample a **5D (volumetric)** tensor with different values of **scale_factor** and **mode**.
# Python program to upsample a 5D (Volumetric) tensor
# On a 5D (Volumetric) tensor we can apply mode='nearest' and 'trilinear'
import torch
# define a tensor and view as a 5D tensor
input = torch.tensor([[1., 2.],[3., 4.]]).view(1,1,1,2,2)
print(input.size())
print("Input Tensor:\n", input)
# use mode='nearest', factor=2
upsample_nearest = torch.nn.Upsample(scale_factor=2, mode='nearest')
output_nearest = upsample_nearest(input)
print("Upsample by a scale_factor=2, mode='nearest'\n", output_nearest)
# use mode='nearest', factor=3
upsample_nearest = torch.nn.Upsample(scale_factor=3, mode='nearest')
output_nearest = upsample_nearest(input)
print("Upsample by a scale_factor=3, mode='nearest'\n", output_nearest)
# use mode='trilinear'
upsample_trilinear = torch.nn.Upsample(scale_factor=2, mode='trilinear')
output_trilinear = upsample_trilinear(input)
print("Upsample by a scale_factor=2, mode='trilinear':\n", output_trilinear)
Output
torch.Size([1, 1, 1, 2, 2])
Input Tensor:
 tensor([[[[[1., 2.],
           [3., 4.]]]]])
Upsample by a scale_factor=2, mode='nearest'
 tensor([[[[[1., 1., 2., 2.],
           [1., 1., 2., 2.],
           [3., 3., 4., 4.],
           [3., 3., 4., 4.]],
          [[1., 1., 2., 2.],
           [1., 1., 2., 2.],
           [3., 3., 4., 4.],
           [3., 3., 4., 4.]]]]])
Upsample by a scale_factor=3, mode='nearest'
 tensor([[[[[1., 1., 1., 2., 2., 2.],
           [1., 1., 1., 2., 2., 2.],
           [1., 1., 1., 2., 2., 2.],
           [3., 3., 3., 4., 4., 4.],
           [3., 3., 3., 4., 4., 4.],
           [3., 3., 3., 4., 4., 4.]],
          [[1., 1., 1., 2., 2., 2.],
           [1., 1., 1., 2., 2., 2.],
           [1., 1., 1., 2., 2., 2.],
           [3., 3., 3., 4., 4., 4.],
           [3., 3., 3., 4., 4., 4.],
           [3., 3., 3., 4., 4., 4.]],
          [[1., 1., 1., 2., 2., 2.],
           [1., 1., 1., 2., 2., 2.],
           [1., 1., 1., 2., 2., 2.],
           [3., 3., 3., 4., 4., 4.],
           [3., 3., 3., 4., 4., 4.],
           [3., 3., 3., 4., 4., 4.]]]]])
Upsample by a scale_factor=2, mode='trilinear':
 tensor([[[[[1.0000, 1.2500, 1.7500, 2.0000],
           [1.5000, 1.7500, 2.2500, 2.5000],
           [2.5000, 2.7500, 3.2500, 3.5000],
           [3.0000, 3.2500, 3.7500, 4.0000]],
          [[1.0000, 1.2500, 1.7500, 2.0000],
           [1.5000, 1.7500, 2.2500, 2.5000],
           [2.5000, 2.7500, 3.2500, 3.5000],
           [3.0000, 3.2500, 3.7500, 4.0000]]]]])
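Instead of a **scale_factor**, a target output **size** can be passed directly; a minimal sketch, reusing the 5D input from above:
# Request an explicit output size (D, H, W) rather than a scale factor
upsample_size = torch.nn.Upsample(size=(3, 6, 6), mode='nearest')
print(upsample_size(input).size())   # torch.Size([1, 1, 3, 6, 6])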