Every PyTorch function has a libtorch counterpart; only the syntax differs slightly. Official libtorch documentation:
https://pytorch.org/cppdocs/api/library_root.html
The catch is that the official docs read like bare function declarations, with no description of what each function actually does, so you have to guess from the names. For example, I wanted a function that produces a tensor with the same shape as an existing torch::Tensor but filled with a given value. I remembered seeing a function starting with "full" somewhere, so I searched for "full" and found full_like, which looked like what I needed (see section 0).
Contents
- Debugging tips
- CMakeLists.txt
- 0. torch::full_like
- 1. Creating tensors: torch::rand, torch::empty, torch::ones
- 2. Concatenating tensors: torch::cat
- 3. Slicing and indexing: select (shallow copy), index_select (deep copy), index (deep copy), slice (shallow copy)
Debugging tips:

```cpp
torch::Tensor box_1 = torch::rand({5,4});
std::cout << box_1 << std::endl; // prints the values
box_1.print();                   // prints the type and shape
```
CMakeLists.txt
```cmake
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(main)

SET(CMAKE_BUILD_TYPE "Debug")

set(CMAKE_PREFIX_PATH "/data_2/everyday/0429/pytorch/torch")
find_package(Torch REQUIRED)

set(CMAKE_PREFIX_PATH "/home/yhl/software_install/opencv3.2")
find_package(OpenCV REQUIRED)

add_executable(main main.cpp)
target_link_libraries(main "${TORCH_LIBRARIES}")
target_link_libraries(main ${OpenCV_LIBS})
set_property(TARGET main PROPERTY CXX_STANDARD 11)
```
0.torch::full_like
static Tensor at::full_like(const Tensor &self, Scalar fill_value, const TensorOptions &options = {}, c10::optional<MemoryFormat> memory_format = c10::nullopt)
Then I tried it myself:

```cpp
#include <iostream>
#include "torch/script.h"
#include "torch/torch.h"

using namespace std;

int main()
{
    torch::Tensor tmp_1 = torch::rand({2,3});
    torch::Tensor tmp_2 = torch::full_like(tmp_1, 1);
    cout << tmp_1 << endl;
    cout << tmp_2 << endl;
}
```
The output:
0.8465 0.5771 0.4404
0.9805 0.8665 0.7807
[ Variable[CPUFloatType]{2,3} ]
1 1 1
1 1 1
[ Variable[CPUFloatType]{2,3} ]
1. Creating tensors: torch::rand, torch::empty, torch::ones
1.1 torch::rand
```cpp
torch::Tensor input = torch::rand({1,3,2,3});
```
(1,1,.,.) =
0.5943 0.4822 0.6663
0.7099 0.0374 0.9833
(1,2,.,.) =
0.4384 0.4567 0.2143
0.3967 0.4999 0.9196
(1,3,.,.) =
0.2467 0.5066 0.8654
0.7873 0.4758 0.3718
[ Variable[CPUFloatType]{1,3,2,3} ]
1.2 torch::empty
```cpp
torch::Tensor a = torch::empty({2, 4}); // uninitialized memory -- the values below are garbage
std::cout << a << std::endl;
```
7.0374e+22 5.7886e+22 6.7120e+22 6.7331e+22
6.7120e+22 1.8515e+28 7.3867e+20 9.2358e-01
[ Variable[CPUFloatType]{2,4} ]
1.3 torch::ones
```cpp
torch::Tensor a = torch::ones({2, 4});
std::cout << a << std::endl;
```
1 1 1 1
1 1 1 1
[ Variable[CPUFloatType]{2,4} ]
2. Concatenating tensors: torch::cat
2.1 Concatenating along columns

```cpp
torch::Tensor a = torch::rand({2,3});
torch::Tensor b = torch::rand({2,1});
torch::Tensor cat_1 = torch::cat({a,b}, 1); // concatenate along dim 1 (columns) -- requires the same number of rows

std::cout << a << std::endl;
std::cout << b << std::endl;
std::cout << cat_1 << std::endl;
```
0.3551 0.7215 0.3603
0.1188 0.4577 0.2201
[ Variable[CPUFloatType]{2,3} ]
0.5876
0.3040
[ Variable[CPUFloatType]{2,1} ]
0.3551 0.7215 0.3603 0.5876
0.1188 0.4577 0.2201 0.3040
[ Variable[CPUFloatType]{2,4} ]
Note: if the row counts differ, you get an error like the following:
terminate called after throwing an instance of 'std::runtime_error'
what(): invalid argument 0: Sizes of tensors must match except in dimension 1. Got 2 and 4 in dimension 0 at /data_2/everyday/0429/pytorch/aten/src/TH/generic/THTensor.cpp:689
2.2 Concatenating along rows

```cpp
torch::Tensor a = torch::rand({2,3});
torch::Tensor b = torch::rand({1,3});
torch::Tensor cat_1 = torch::cat({a,b}, 0); // concatenate along dim 0 (rows) -- requires the same number of columns

std::cout << a << std::endl;
std::cout << b << std::endl;
std::cout << cat_1 << std::endl;
```
0.0004 0.7852 0.4586
0.1612 0.6524 0.7655
[ Variable[CPUFloatType]{2,3} ]
0.5999 0.5445 0.2152
[ Variable[CPUFloatType]{1,3} ]
0.0004 0.7852 0.4586
0.1612 0.6524 0.7655
0.5999 0.5445 0.2152
[ Variable[CPUFloatType]{3,3} ]
2.3 Another example

```cpp
torch::Tensor box_1 = torch::rand({5,4});
torch::Tensor score_1 = torch::rand({5,1});
torch::Tensor label_1 = torch::rand({5,1});
torch::Tensor result_1 = torch::cat({box_1, score_1, label_1}, 1);
result_1.print();
```
[Variable[CPUFloatType] [5, 6]]
3. Slicing and indexing: select (shallow copy), index_select (deep copy), index (deep copy), slice (shallow copy)
select (shallow copy): takes a single specified row or column
index (deep copy): takes specified rows only
index_select (deep copy): takes multiple specified rows or columns
slice (shallow copy): takes a contiguous range of rows or columns
3.1 inline Tensor Tensor::select(int64_t dim, int64_t index) — selects one slice along the given dimension; for a 2-D tensor, dim 0 takes a row and dim 1 takes a column, and the second argument is the index.
3.1.1 select, taking a row

```cpp
torch::Tensor a = torch::rand({2,3});
std::cout << a << std::endl;
torch::Tensor b = a.select(0, 1); // take row 1
std::cout << b << std::endl;
```
0.6201 0.7021 0.1975
0.3080 0.6304 0.1558
[ Variable[CPUFloatType]{2,3} ]
0.3080
0.6304
0.1558
[ Variable[CPUFloatType]{3} ]
3.1.2 select, taking a column

```cpp
torch::Tensor a = torch::rand({2,3});
std::cout << a << std::endl;
torch::Tensor b = a.select(1, 1); // take column 1
std::cout << b << std::endl;
```
0.8295 0.9871 0.1287
0.8466 0.7719 0.2354
[ Variable[CPUFloatType]{2,3} ]
0.9871
0.7719
[ Variable[CPUFloatType]{2} ]
Note: this is a shallow copy — modifying b also changes the corresponding values in a.
3.1.3 select is a shallow copy

```cpp
torch::Tensor a = torch::rand({2,3});
std::cout << a << std::endl;

torch::Tensor b = a.select(1, 1);
std::cout << b << std::endl;

b[0] = 0.0;
std::cout << a << std::endl;
std::cout << b << std::endl;
```
0.0938 0.2861 0.0089
0.3481 0.5806 0.3711
[ Variable[CPUFloatType]{2,3} ]
0.2861
0.5806
[ Variable[CPUFloatType]{2} ]
0.0938 0.0000 0.0089
0.3481 0.5806 0.3711
[ Variable[CPUFloatType]{2,3} ]
0.0000
0.5806
[ Variable[CPUFloatType]{2} ]
As you can see, after b[0] = 0.0 the corresponding position in both a and b becomes 0. A shallow copy!
3.2 inline Tensor Tensor::index_select(Dimname dim, const Tensor & index) — as before, dim 0 selects rows and dim 1 selects columns; index holds the row or column numbers to take. One quirk: index must be of type torch::kLong (hence the toType(torch::kLong)). Another oddity: when I tried to build the index tensor from a C array with from_blob, idx came out all zeros; at the time I didn't know why.

```cpp
torch::Tensor a = torch::rand({2,6});
std::cout << a << std::endl;

torch::Tensor idx = torch::empty({4}).toType(torch::kLong);
idx[0] = 0;
idx[1] = 2;
idx[2] = 4;
idx[3] = 1;

// int idx_data[4] = {1,3,2,4};
// torch::Tensor idx = torch::from_blob(idx_data, {4}).toType(torch::kLong); // idx is all zeros ?????????????????

std::cout << idx << std::endl;
torch::Tensor b = a.index_select(1, idx);
std::cout << b << std::endl;
```
0.4956 0.5028 0.0863 0.9464 0.6714 0.5348
0.3523 0.2245 0.0924 0.7088 0.6913 0.2237
[ Variable[CPUFloatType]{2,6} ]
0
2
4
1
[ Variable[CPULongType]{4} ]
0.4956 0.0863 0.6714 0.5028
0.3523 0.0924 0.6913 0.2245
[ Variable[CPUFloatType]{2,4} ]
3.2.2 index_select is a deep copy

```cpp
torch::Tensor a = torch::rand({2,6});
std::cout << a << std::endl;

torch::Tensor idx = torch::empty({4}).toType(torch::kLong);
idx[0] = 0;
idx[1] = 2;
idx[2] = 4;
idx[3] = 1;

std::cout << idx << std::endl;
torch::Tensor b = a.index_select(1, idx);
std::cout << b << std::endl;

b[0][0] = 0.0;               // modify the copy...
std::cout << a << std::endl; // ...a is unchanged
std::cout << b << std::endl;
```
0.6118 0.6078 0.5052 0.9489 0.6201 0.8975
0.0901 0.2040 0.1452 0.6452 0.9593 0.7454
[ Variable[CPUFloatType]{2,6} ]
0
2
4
1
[ Variable[CPULongType]{4} ]
0.6118 0.5052 0.6201 0.6078
0.0901 0.1452 0.9593 0.2040
[ Variable[CPUFloatType]{2,4} ]
0.6118 0.6078 0.5052 0.9489 0.6201 0.8975
0.0901 0.2040 0.1452 0.6452 0.9593 0.7454
[ Variable[CPUFloatType]{2,6} ]
0.0000 0.5052 0.6201 0.6078
0.0901 0.1452 0.9593 0.2040
[ Variable[CPUFloatType]{2,4} ]
3.3 inline Tensor Tensor::index(TensorList indices)
In my experiments this function only selects rows, and it returns a deep copy:

```cpp
torch::Tensor a = torch::rand({2,6});
std::cout << a << std::endl;

torch::Tensor idx_1 = torch::empty({2}).toType(torch::kLong);
idx_1[0] = 0;
idx_1[1] = 1;

torch::Tensor bb = a.index(idx_1);
bb[0][0] = 0; // modify the copy -- a stays unchanged
std::cout << bb << std::endl;
std::cout << a << std::endl;
```
0.1349 0.8087 0.2659 0.3364 0.0202 0.4498
0.4785 0.4274 0.9348 0.0437 0.6732 0.3174
[ Variable[CPUFloatType]{2,6} ]
0.0000 0.8087 0.2659 0.3364 0.0202 0.4498
0.4785 0.4274 0.9348 0.0437 0.6732 0.3174
[ Variable[CPUFloatType]{2,6} ]
0.1349 0.8087 0.2659 0.3364 0.0202 0.4498
0.4785 0.4274 0.9348 0.0437 0.6732 0.3174
[ Variable[CPUFloatType]{2,6} ]
3.4 inline Tensor Tensor::slice(int64_t dim, int64_t start, int64_t end, int64_t step) — dim 0 slices rows, dim 1 slices columns; the range runs from start to end (end excluded).
As the output shows, slice is a shallow copy!

```cpp
torch::Tensor a = torch::rand({2,6});
std::cout << a << std::endl;

torch::Tensor b = a.slice(0, 0, 1); // row 0
torch::Tensor c = a.slice(1, 0, 3); // columns 0..2

b[0][0] = 0.0;
std::cout << b << std::endl;
std::cout << c << std::endl;
std::cout << a << std::endl;
```
0.8270 0.7952 0.3743 0.7992 0.9093 0.5945
0.3764 0.8419 0.7977 0.4150 0.8531 0.9207
[ Variable[CPUFloatType]{2,6} ]
0.0000 0.7952 0.3743 0.7992 0.9093 0.5945
[ Variable[CPUFloatType]{1,6} ]
0.0000 0.7952 0.3743
0.3764 0.8419 0.7977
[ Variable[CPUFloatType]{2,3} ]
0.0000 0.7952 0.3743 0.7992 0.9093 0.5945
0.3764 0.8419 0.7977 0.4150 0.8531 0.9207
[ Variable[CPUFloatType]{2,6} ]