Notebook

Switching CUDA and cuDNN versions

ln -s <source> <link>
ls -al shows where each link points
Installing CUDA 10.0: on CentOS, don't install from the rpm packages (you'll hit dependency errors); use the .run installer instead.
To keep several CUDA versions side by side, the usual approach is a symlink at /usr/local/cuda; then add to ~/.bashrc:

export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64

If you hit the error
missing CUDA_INCLUDE_DIRS:
add to ~/.bashrc:

export CUDA_INCLUDE_DIRS=/usr/local/cuda/extras/CUPTI/include

To switch CUDA versions later, download the matching installer from the NVIDIA site, install it, delete the old symlink, link /usr/local/cuda to the new directory, and verify with nvcc -V.
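The symlink switch described above can be sketched as a runnable example (a temp directory stands in for /usr/local, and the cuda-10.0/cuda-10.1 directories are placeholders):

```shell
# Simulate switching the /usr/local/cuda symlink between versions.
root=$(mktemp -d)
mkdir -p "$root/cuda-10.0" "$root/cuda-10.1"

ln -s "$root/cuda-10.0" "$root/cuda"   # initial link
readlink "$root/cuda"                  # points at cuda-10.0

# To switch: remove the old link, then point it at the new directory.
rm "$root/cuda"
ln -s "$root/cuda-10.1" "$root/cuda"
readlink "$root/cuda"                  # now points at cuda-10.1
```

Because PATH and LD_LIBRARY_PATH reference /usr/local/cuda (not a versioned path), re-pointing the one link switches every tool at once.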

Upgrading the GPU driver on Ubuntu
sudo add-apt-repository ppa:graphics-drivers/ppa && sudo apt update
Installing through the GUI is the safer route.
Driver 430 once left the machine unbootable; had to uninstall and reinstall.
[CUDA/driver compatibility table](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html)
![图片说明](https://uploadfiles.nowcoder.com/images/20190902/1485076_1567415613939_3D5C15380648E8E752970E2BEB93517C "图片标题")

https://blog.csdn.net/EliminatedAcmer/article/details/80528980
Installing the GPU driver on Ubuntu
After installing CUDA 9.2, the machine booted to a blinking cursor.
Don't take the shortcut of installing via
![图片说明](https://uploadfiles.nowcoder.com/images/20190922/1485076_1569153260238_265357489C735FC2B7651C1E258DB9FD "图片标题")
this path; installing it that way always causes problems.

ffmpeg: stack two videos vertically (top/bottom)
ffmpeg -i 2-0001.mkv -i 2-0001.mkv -filter_complex vstack=inputs=2 -t 2 output.mp4

2019-10-12
If some libraries won't link at build time:
ldd — check a binary's shared-library dependencies
locate — find where a library lives on disk
If a library isn't installed, https://pkgs.org/ lets you search for the package that provides it.
export LD_LIBRARY_PATH=/data/zhuyinghao/vino_inference_sdk/vino/cv/py3/sys:$LD_LIBRARY_PATH
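The triage flow above can be sketched in two steps (/bin/sh is used as a sample dynamic binary, and /opt/mylibs is a made-up directory):

```shell
# 1. List a binary's shared-library dependencies; any line showing
#    "not found" is a library the loader cannot resolve.
ldd /bin/sh

# 2. Once the missing library is located (or installed), prepend its
#    directory to the loader search path for this session.
export LD_LIBRARY_PATH="/opt/mylibs:${LD_LIBRARY_PATH}"
```

`locate libfoo.so` (after `updatedb`) helps find where a library already lives before resorting to installing a new package.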

https://answers.opencv.org/question/145214/convert-cvmat-to-stdvector-without-copying/
![图片说明](https://uploadfiles.nowcoder.com/images/20191017/1485076_1571300489809_AB5568975C626A96181CCA795AEC8522 "图片标题")

time ./ffmpeg -i ${input} -vf qrestore2=model=torch_yuv:model_path=./qrestore_model/yuv_train_net_70.pt:device=cpu:denoise=0 -loglevel info -qscale 0 -y ./${name}_torch_yuv.mp4 &>> cal_time.log
time ./ffmpeg -i ${input} -vf qrestore2=model=torch_rgb:model_path=./qrestore_model/model_0422_190_G.pt:device=cpu:denoise=0 -loglevel info -qscale 0 -y ./${name}_torch_rgb.mp4 &>> cal_time.log
time ./ffmpeg -i ${input} -vf qrestore2=model=vino_rgb:model_path=/data/zhuyinghao/qrestore_inference/code/qrestore2.0_cpp_use_libtorch/qrestore_vino/model/model_190_net_G.xml:vino_extension=/opt/vino_inference_sdk/vino/lib64/libcpu_extension.so:device=cpu:denoise=0 -loglevel info -qscale 0 -y ./${name}_torch_vino_rgb.mp4 &>> cal_time.log

Error while building libav:
libavcodec/libx264.c: In function ‘X264_frame’:
libavcodec/libx264.c:246:9: error: ‘x264_bit_depth’ undeclared (first use in this function)
if (x264_bit_depth > 8)
^
libavcodec/libx264.c:246:9: note: each undeclared identifier is reported only once for each function it appears in
libavcodec/libx264.c: In function ‘X264_init_static’:
libavcodec/libx264.c:707:9: error: ‘x264_bit_depth’ undeclared (first use in this function)
if (x264_bit_depth == 8)
Searched around and found hardly any proper fix; solved it by following the x264 patch for ffmpeg: http://git.videolan.org/?p=ffmpeg.git;a=patch;h=2a111c99a60fdf4fe5eea2b073901630190c6c93

scl enable devtoolset-3 bash

-DCMAKE_BUILD_TYPE=Release -DCUDA_nppi_LIBRARY=true

cmake -D WITH_CUDA=ON -D WITH_CUBLAS=ON -D WITH_CUFFT=ON -D WITH_NVCUVID=ON -D CUDA_FAST_MATH=ON -D CUDA_GENERATION=Auto -D OPENCV_EXTRA_MODULES_PATH=../opencv_contrib/modules ..

-- Registering hook 'INIT_MODULE_SOURCES_opencv_dnn': /data/opencv_gpu/opencv-master/modules/dnn/cmake/hooks/INIT_MODULE_SOURCES_opencv_dnn.cmake
CMake Error at modules/dnn/CMakeLists.txt:97 (message):
  CUDA backend for DNN module requires CC 5.3 or higher.  Please remove
  unsupported architectures from CUDA_ARCH_BIN option.

Fix:
search the cmake files for __cuda_arch_bin
and remove the architectures below 5.3:

 if(CUDA_VERSION VERSION_LESS "9.0")
        #set(__cuda_arch_bin "3.0 3.5 3.7 5.0 5.2 6.0 6.1")
        set(__cuda_arch_bin "6.0 6.1")
        #set(CUDA_ARCH_BIN "3.0 3.5 3.7 5.0 5.2 6.0 6.1")
 elseif(CUDA_VERSION VERSION_LESS "10.0")
        set(__cuda_arch_bin "6.0 6.1 7.0")
        #set(CUDA_ARCH_BIN "3.0 3.5 3.7 5.0 5.2 6.0 6.1 7.0")
 else()
        set(__cuda_arch_bin "6.0 6.1 7.0 7.5")
        #set(CUDA_ARCH_BIN "3.0 3.5 3.7 5.0 5.2 6.0 6.1 7.0 7.5")
 endif()

cmake succeeded, but the build then failed.

Later, CUDA_ARCH_BIN was specified directly at cmake time:
cmake -D WITH_CUDA=ON -D WITH_CUBLAS=ON -D WITH_CUFFT=ON -D WITH_NVCUVID=ON -D CUDA_FAST_MATH=ON -D CUDA_ARCH_BIN=6.1 -D CUDA_GENERATION=Auto -D OPENCV_EXTRA_MODULES_PATH=../opencv_contrib/modules ..

cmake -D WITH_CUDA=ON -D WITH_CUBLAS=ON -D WITH_CUFFT=ON -D WITH_NVCUVID=ON -D CUDA_FAST_MATH=ON -D CUDA_ARCH_BIN=6.1 -D CUDA_GENERATION=Auto -D CUDA_nppi_LIBRARY=true ..

Please make sure that

  • PATH includes /data/cuda-10.0/bin
  • LD_LIBRARY_PATH includes /data/cuda-10.0/lib64, or, add /data/cuda-10.0/lib64 to /etc/ld.so.conf and run ldconfig as root

Installing cmake from source
wget https://cmake.org/files/v3.6/cmake-3.6.2.tar.gz
tar xvf cmake-3.6.2.tar.gz && cd cmake-3.6.2/
./bootstrap
gmake -j8
gmake install
/usr/local/bin/cmake --version
Remove the old cmake, create a symlink, and verify:
yum remove cmake -y
ln -s /usr/local/bin/cmake /usr/bin/
cmake --version

The cmake build failed with:
centos cmake ./cmVersionConfig.h:7:1: error: missing terminating " character

The downloaded archive turned out to be the problem: every version downloaded as a .zip hit this error; switching to the .tar.gz fixed it.

select input_param,sharpness from Framework_quality_check where sharpness>=0.9 and sharpness<=1 and start_time between '2019-09-05' and '2019-10-31' and input_param like '%zongyi%' and input_param like '%mkv%' and input_param like '%fileLocation%'

basename example.tar.a.b.c.gz .c.gz
# => example.tar.a.b

FILE="example.tar.gz"

echo "${FILE%%.*}"     # strip the longest suffix starting at the first '.'
# => example

echo "${FILE%.*}"      # strip the shortest suffix starting at the last '.'
# => example.tar

echo "${FILE#*.}"      # strip the shortest prefix ending at the first '.'
# => tar.gz

echo "${FILE##*.}"     # strip the longest prefix ending at the last '.'
# => gz

# In bash this is commonly written as:
filename=$(basename "$fullfile")
extension="${filename##*.}"
filename="${filename%.*}"
————————————————
Copyright notice: original article by CSDN blogger "RonnyJiang", under the CC 4.0 BY-SA license; reproduction must include this notice and a link to the original.
Original: https://blog.csdn.net/RonnyJiang/article/details/52386121

real 0m0.109s
user 0m0.080s
sys 0m0.029s
bjwyx-6min.mkv

real 91m58.011s
user 269m10.084s
sys 18m30.749s
bjwyx-6min_restore.mp4

real 87m1.553s
user 287m2.318s
sys 17m56.803s
jd-28-6min.mkv

real 58m43.122s
user 149m8.490s
sys 11m24.640s
jd-28-6min_restore.mp4

real 43m23.533s
user 111m13.592s
sys 8m45.605s
jqdsy-6min.mkv

GPU run times without denoise

6min-restore

real    0m12.370s
user    0m0.089s
sys    0m0.049s
bjwyx-6min.mkv

real    84m51.770s
user    309m48.195s
sys    18m10.691s
bjwyx-6min_restore.mp4

real    79m12.773s
user    325m5.459s
sys    17m58.529s
bjwyx-6min_restore_restore.mp4

real    82m40.649s
user    340m15.740s
sys    18m36.421s
jd-28-6min.mkv

real    51m20.795s
user    174m21.728s
sys    9m16.073s
jd-28-6min_restore.mp4

real    51m9.927s
user    180m22.828s
sys    9m10.829s
jd-28-6min_restore_restore.mp4

real    37m47.379s
user    133m48.595s
sys    6m44.176s
jqdsy-6min.mkv

real    51m31.423s
user    212m46.812s
sys    9m48.838s
jqdsy-6min_restore.mp4

real    0m0.161s
user    0m0.079s
sys    0m0.031s
nezha_6min.mkv

real    50m18.672s
user    175m14.706s
sys    8m56.598s
swsqy-6min.mkv

real    50m29.394s
user    165m20.706s
sys    8m30.871s
wjfy-6min.mkv

real    100m38.379s
user    364m8.060s
sys    18m27.765s
yhbxb-6min.mkv

real    48m45.559s
user    166m35.977s
sys    8m44.169s
select input_param,sharpness from Framework_quality_check where sharpness >0 and sharpness <1 and start_time between '2019-09-05' and '2019-11-12'  and input_param like '%zongyi%' and input_param like '%mkv%'

Running valgrind on the GPU build of ffmpeg

==334586== 
==334586== HEAP SUMMARY:
==334586==     in use at exit: 2,101,652,496 bytes in 1,608,370 blocks
==334586==   total heap usage: 3,532,073 allocs, 1,923,703 frees, 4,400,394,734 bytes allocated
==334586== 
==334586== LEAK SUMMARY:
==334586==    definitely lost: 4,024 bytes in 35 blocks
==334586==    indirectly lost: 2,788 bytes in 33 blocks
==334586==      possibly lost: 6,427,112 bytes in 47,545 blocks
==334586==    still reachable: 2,095,218,572 bytes in 1,560,757 blocks
==334586==                       of which reachable via heuristic:
==334586==                         stdstring          : 404,523 bytes in 5,807 blocks
==334586==                         newarray           : 1,552 bytes in 17 blocks
==334586==         suppressed: 0 bytes in 0 blocks
==334586== Rerun with --leak-check=full to see details of leaked memory
==334586== 
==334586== For lists of detected and suppressed errors, rerun with: -s
==334586== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
yum install -y yum-utils centos-release-scl
yum-config-manager --enable rhel-server-rhscl-7-rpms
yum install -y devtoolset-7-gcc devtoolset-7-gcc-c++ devtoolset-7-gcc-gfortran devtoolset-7-binutils

It reported the packages could not be found, so the commands were changed to

Capturing the output of time

This redirection does not capture time's report:

time ls &> a.txt

but this does:

{ time ls; } &> a.txt

Note the spaces inside the braces and the semicolon before the closing brace.
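The reason is that `time` is a bash keyword whose report goes to the shell's own stderr rather than the command's, so only a redirection applied to a group catches it. A runnable check (the output file comes from mktemp):

```shell
# `time` reports on the shell's stderr; redirecting the grouped command
# captures the real/user/sys lines in the file.
out=$(mktemp)
bash -c "{ time true; } 2> $out"
cat "$out"   # shows the real/user/sys lines
```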

sftp -o ServerAliveInterval=30 -P 7122 root,10.16.195.37,22,zhuyinghao@jumpbox.qiyi.domain

Force-overwrite the local checkout with the remote:

git fetch --all
git reset --hard origin/master
git pull

FTP for PPC competitor-comparison files

lftp cloud_codec:test_codec@10.110.27.67/2019PPC

Crop an image into left and right halves:

ls -1 *.png | sed 's,.*,& &,' | xargs -n 2 convert -crop 50%x100% +repage
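The `sed 's,.*,& &,'` stage duplicates each filename so that `xargs -n 2` hands convert the same name as both input and output. A sketch of the pipeline's shape, with echo standing in for convert:

```shell
# Each input line "a.png" becomes "a.png a.png"; xargs -n 2 then runs the
# command once per pair (echo stands in for convert here).
printf 'a.png\nb.png\n' | sed 's,.*,& &,' | xargs -n 2 echo cropping
# cropping a.png a.png
# cropping b.png b.png
```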

Apply bilateral first, then restore:

./ffmpeg -i /data/zhuyinghao/qrestore-2.0.1-test-dataset/zaosheng_dianying_8_2_noise.mkv -vf "[in]bilateral=Diameter=9:sigmaColor=10:sigmaSpace=10[middle];[middle]qrestore2=model=torch_rgb:model_path=./qrestore_model/model_0422_190_G.pt:device=gpu:denoise=0[out]" -loglevel info -c:v libx264 -x264opts qp=12:min-keyint=25:keyint=128 zaosheng_dianying_8_2_bila_restore.mp4

Apply hqdn3d first, then restore:

./ffmpeg -i dianshiju_2_1_argus_0.33.mkv -vf "[in]hqdn3d=0:0:6:0[middle];[middle]qrestore2=model=torch_rgb:model_path=./qrestore_model/model_0422_190_G.pt:device=gpu:denoise=1:flat-weight=0.5:edge-weight=1[out]" -loglevel info -c:v libx264 -x264opts qp=12:min-keyint=25:keyint=128 dianshiju_2_1_argus_0.33_hqdn_denoise.mp4

Glad to hear you found something that works for you!

Since you got me down the rabbit hole of denoisers, I figured I would share my research results with anyone who’s interested.

ffmpeg has six denoisers built-in that I was able to find, which I’ve listed below along with their transcoding speeds on a 1080p source video using a four-core laptop computer. I wrote scripts that used a variety of settings with each denoiser to make sure I was seeing the best each one had to offer.

atadenoise (20 fps) - by averaging pixels across frames, it reduces contrast of noise areas to make them less obvious as opposed to using a specialized algorithm to smooth the noise away; this reduces overall image contrast; filter also darkens the overall output

dctdnoiz (1.6 fps) - creates beautiful detail on a still image, but randomizes the noise across frames so much that it actually makes the noise look worse during playback, plus it darkens the output

nlmeans (0.6 fps) - darkens the output, but sometimes has redeeming qualities (more on this later)

hqdn3d (21 fps) - color neutral which is good, but the output looks smeary to me where it loses a lot of fine detail in hair strands and wood grain

owdenoise (0.3 fps) - color neutral wavelet denoiser with stunningly good results on high-res sources

vaguedenoiser (7.6 fps) - another color neutral wavelet denoiser whose output looks identical to owdenoise, but its processing speed is 25x faster; tried every combination of threshold and nsteps, and found the default settings of 2/6 to consistently produce the closest-to-real-life results

I tested the denoisers on videos I took with my own mirrorless camera, meaning I remember what the scene looked like in real life. In one video, there happened to be a guy in a black business dress shirt made of silk or satin or something with a sheen to it, but the sheen wasn’t coming through due to the noise of the original footage. The wavelet-based denoisers were the only ones to remove and smooth the noise such that the fabric regained the smooth sheen you would expect from silk. To my eye, it bumped up the realism of the video an entire notch to see fabric actually look like fabric. The rest of the frame also dropped to zero dancing noise. It turned the video into a still photograph when nothing was moving. I didn’t realize until this experiment that even a tiny amount of dancing noise can seriously detract from the realism of a video, and that a sense of immersion can be restored by getting rid of it. Obviously, vaguedenoiser is my new weapon of choice.

So, about nlmeans… I found a radical difference between the ffmpeg version and the HandBrake version. I think HandBrake wins on every metric. nlmeans in ffmpeg actually makes video look worse (blockier) if the resolution is 1080p or above, or if the video comes from an excellent camera that has little noise to begin with. nlmeans in ffmpeg also can’t be used as a finishing step because it darkens the output, which destroys any color grading that happened before it. But I found two places where nlmeans in ffmpeg outshined the other ffmpeg denoisers: low-resolution video, and very-high-noise video. nlmeans does great at restoring a VHS capture, which I sense from the author’s web site was one of the original design goals. Secondly, in my tests, nlmeans did better than the other ffmpeg denoisers on high-resolution high-noise videos, which in my case meant a smartphone video in low light using digital zoom. Given these two specialized cases where nlmeans performed well, I could see a workflow where I used nlmeans to create denoised intermediates, then color graded the intermediates to fix the darkened output. Running nlmeans on a noisy source then adding it to the timeline and running vaguedenoiser on the total project did not cause any harm in my tests. But for best results, I think HandBrake is still the way to go where nlmeans is involved.

For my purposes, I think I will stick to vaguedenoiser because it’s beautiful on 1080p and 4K, and it is easily added to my existing ffmpeg filter chain when I do my finishing steps. I don’t have to create an intermediate to pass off to HandBrake this way. However, if I came across a particularly noisy source video, I would probably run it through HandBrake before adding it to my Shotcut project to get the same benefits Andrew noticed.

Good luck to everyone, whatever you use.

ffmpeg error: Too many packets buffered for output stream 0:1. Fix by raising the mux queue size:

-max_muxing_queue_size 1024
http://www.jasonbowdach.com/blog/2015/12/5-tips-on-noise-reduction.html
./ffmpeg -i fengrenji_xiaopangzi.mp4 -vf qrestore2=model=torch_rgb:model_path=./qrestore_model/model_0422_190_G.pt:roi=1:roi_model_path=./qrestore_model/run-0-final_cpu.pt:device=gpu -loglevel info -c:v libx264 -x264opts qp=12:min-keyint=25:keyint=128 fengrenji_roi_bila.mp4

Masking the foreground and denoising the background is a good technique when the background is out of focus.

You would get a slightly better result using a dedicated noise removal plugin (noiseninja or neatimage, or one of the inbuilt Photoshop tools). Gaussian blur is a bit of a blunt instrument, whilst it's good at removing pure random noise it is not as good at removing banding, which is visible in this image. It's also bad at preserving hard edges. An edge aware noise filter would let you mask much closer to the edges of your in-focus subject without risk of blurring it, which in turn would avoid the halo of grain you have around the bottle.

Blending a little (25% or 33%) of the original noisy background back in helps the result look less fake, whilst still allowing a low overall noise level.

Finally when you have a noisy image, save using the highest quality option if using JPEG (or use PNG). JPEG's attempt to compress noise often looks worse than the noise itself!

When changing GPU models, libtorch has to be rebuilt:
(1) Clone the source with --recursive; otherwise submodules are missing and the build fails:
git clone --recursive https://github.com/pytorch/pytorch -b v1.3.0
cd pytorch

If you are updating an existing checkout:

git submodule sync
git submodule update --init --recursive
(2) The number of build threads can be limited (the default here was 80); on machines with few cores the build may crash otherwise:
export MAX_JOBS=0;

After rebuilding libtorch, building ffmpeg failed with the following error:

![图片说明](https://uploadfiles.nowcoder.com/images/20191224/1485076_1577169607044_D97933A30BCD3121F9FB7B2369B3780F "图片标题")
The cause: libtorch was built via python against /root/anaconda3/lib/libstdc++.so.6.0.26, but the openvino setup script had re-pointed the libstdc++.so link into openvino:
![图片说明](https://uploadfiles.nowcoder.com/images/20191224/1485076_1577169899974_6A038368D0D3DFE8E5A984478419C87F "图片标题")
Linking /usr/lib64/libstdc++.so.6 to /root/anaconda3/lib/libstdc++.so.6.0.26 fixed it.
The rebuilt ffmpeg then failed at runtime with
![图片说明](https://uploadfiles.nowcoder.com/images/20191224/1485076_1577170043001_389807982039E01161FF9F8FAFB92DEF "图片标题")
for the same reason; comment out these two lines in /opt/vino_inference_sdk/tools/setupvar.sh:
![图片说明](https://uploadfiles.nowcoder.com/images/20191224/1485076_1577170117603_905A2F7C79AB40D53E34AC9717CDE8C1 "图片标题")

GPU visibility: export CUDA_VISIBLE_DEVICES="" (an empty list hides all GPUs from the process)

Writing CMakeLists.txt
CMakeLists.txt for a CUDA project

# As usual, the minimum cmake version
CMAKE_MINIMUM_REQUIRED(VERSION 2.8)
# Project name
PROJECT(AD-Census)
# Have cmake find CUDA; CUDA must already be installed and its environment set up
FIND_PACKAGE(CUDA REQUIRED)
# C++ and CUDA compile flags, optional
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
SET(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-gencode arch=compute_61,code=sm_61;-std=c++11;)
# Header search paths, as needed
INCLUDE_DIRECTORIES(
    ./containers)
# Library search paths, as needed
LINK_DIRECTORIES(/usr/lib
    /usr/local/lib)
# The key part: tells cmake to compile these sources with nvcc
CUDA_ADD_EXECUTABLE(ad-census
    main.cu
    ./containers/device_memory.cpp
    ./containers/initialization.cpp
)
# Link external libraries, as needed
TARGET_LINK_LIBRARIES(ad-census
    some_library)

Does nvcc need the GPU compute capability specified?
![图片说明](https://uploadfiles.nowcoder.com/images/20191227/1485076_1577451302980_529BA98E0662D54199388AACEDC1BD59 "图片标题")

![图片说明](https://uploadfiles.nowcoder.com/images/20191227/1485076_1577451329321_953E93A3E48BDDBD4552B4D47FC4B973 "图片标题")

Detecting GPU memory leaks

Use cuda-memcheck, which ships with CUDA:

cuda-memcheck --leak-check full ./qrestore_denoise --model ../model_190_g_eval.pt --yuv ../jiaoyu.yuv --width 1920 --height 1080 --device GPU --model_type rgb >> memcheck-mem_guard_7.txt

Note: call cudaDeviceReset(); before the program exits (requires #include <cuda_runtime.h>).

Manually freeing GPU memory cached by libtorch

#include "torch/utils.h" 
#include <c10/cuda/CUDACachingAllocator.h>
c10::cuda::CUDACachingAllocator::emptyCache();

C10_CUDA_API DeviceStats getDeviceStats(int device);
![图片说明](https://uploadfiles.nowcoder.com/images/20200119/1485076_1579438950681_ABF58FFA9A8E6EAE4FCB2D957F228C11 "图片标题")
The libtorch 1.3 interface:

void display_c10_cuda_mem_stat(int32_t sleep_time) {
    printf("currentMemoryAllocated/[maxMemoryAllocated]: \t %0.1f/[%0.1f] MB\n ",
        c10::cuda::CUDACachingAllocator::currentMemoryAllocated(0) / 1024.0 / 1024.0,
        c10::cuda::CUDACachingAllocator::maxMemoryAllocated(0) / 1024.0 / 1024.0);
    printf("currentMemoryCached/[maxMemoryCached]: \t %0.1f/[%0.1f] MB\n",
        c10::cuda::CUDACachingAllocator::currentMemoryCached(0) / 1024.0 / 1024.0,
        c10::cuda::CUDACachingAllocator::maxMemoryCached(0) / 1024.0 / 1024.0);
    std::this_thread::sleep_for(std::chrono::milliseconds(1000*sleep_time));
}

Building ffmpeg in debug vs release: for a debug build, configure with

--disable-optimizations

In C you can write #include <stdio.h> or #include "stdio.h"; the difference:

#include <stdio.h> looks only in the system include directories.

#include "stdio.h" looks in the current directory first, and falls back to the system directories if not found.

When compiling, gcc searches for headers in this order:

1. Paths given with the -I option, searched in the order listed if there are several. For example,

gcc -I /usr/local/include/node a.c
2. Paths in the environment variables C_INCLUDE_PATH and CPLUS_INCLUDE_PATH.

3. The system default paths: /usr/include, /usr/local/include, and gcc's own include directory (e.g. /usr/lib/gcc-lib/i386-linux/2.95.2/include; varies by system).

Includes can also use relative paths: if a.c needs /usr/local/include/node/v8.h, then since /usr/local/include is on the default search path, a.c can write #include <node/v8.h>.
————————————————
Copyright notice: original article by CSDN blogger "chosen0ne", under the CC 4.0 BY-SA license; reproduction must include this notice and a link to the original.
Original: https://blog.csdn.net/chosen0ne/article/details/7210946

OpenCV header file problem
The problem is that under the "include" directory for "opencv2" there seem to be tons of header files missing. The only header file there at the moment is "opencv.hpp", which includes a whole set of other files which aren't there. Does anyone have any idea where I can get these files from?

The header files of the modules are in their own directories. E.g., you can find calib3d.hpp in /modules/calib3d/include/opencv2/calib3d. The Makefile created by CMake knows these addresses, hence when you make install the header files are all copied into /usr/local/include/opencv2.

Ahh I had only issued a make command and needed to do a make install! Thanks! Do you know how I can set the install path to change it from /usr/local/include etc.? Thanks again!

call cmake -DCMAKE_INSTALL_PREFIX=/usr or any other directory you want instead of /usr.

#include <cuda_runtime.h>
cudaGetDeviceCount(&num_devices);
cudaSetDevice(cuda_device);
module.eval();
-i /data/zhuyinghao/qrs_2.0.1_online_1225/jiaoye/in/clip/new_clip/我就是演员之巅峰对决-20191221_clip_2.mkv -vf qrestore2=model=torch_rgb:model_path=./qrestore_model/model_0422_190_G.pt:device=gpu:denoise=0 -loglevel info -c:v libx264 -x264opts qp=12:min-keyint=25:keyint=128
jarvis create-runonce-job "runonce-name" --volume-name "jiezhen_sx" --volume-token "375add" --cpu-quota 24 --gpu-quota 2 --mems-in-mb 64000 --cluster-name "runonce-online01-gpu-training" --group "yinghao"
int main(int argc, char** argv)
{
    try
    {
        Args args;
        if (argc < 2)
        {
            printHelp();
            args.camera_id = 0;
            args.src_is_camera = true;
        }
        else
        {
            args = Args::read(argc, argv);
            if (help_showed)
                return -1;
        }
        App app(args);
        app.run();
    }
    catch (const Exception& e) { return cout << "error: "  << e.what() << endl, 1; }
    catch (const exception& e) { return cout << "error: "  << e.what() << endl, 1; }
    catch(...) { return cout << "unknown exception" << endl, 1; }
    return 0;
}

0.6987
0.6810
0.7456
wopa0.7356
qing0.7772

./ffmpeg -i E-003-jiaoyu_pianduan-5s.mp4 -vf roibila=roi_model_path=./qrestore_model/run-0-final_cpu.pt:device=gpu -y a.mp4

haojinhua's machine:

export QB=root,10.57.211.144,22;ssh haojinhua_sx@jumpbox.qiyi.domain -o SendEnv=QB
(initScaleNets_filter): ModuleList(
    (0): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU()
    (4): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU()
    (7): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
    (8): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (9): ReLU()
    (10): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
    (11): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (12): ReLU()
    (13): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
    (14): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU()
    (16): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU()
    (19): Upsample(scale_factor=2.0, mode=bilinear)
    (20): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (21): ReLU()
    (22): Upsample(scale_factor=2.0, mode=bilinear)
    (23): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (24): ReLU()
    (25): Upsample(scale_factor=2.0, mode=bilinear)
    (26): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU()
    (28): Upsample(scale_factor=2.0, mode=bilinear)
    (29): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (30): ReLU()
    (31): Upsample(scale_factor=2.0, mode=bilinear)
    (32): Conv2d(32, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (33): ReLU()
  )
  (initScaleNets_filter1): ModuleList(
    (0): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  )
  (initScaleNets_filter2): ModuleList(
    (0): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  )
  (flownets): PWCDCNet(
    (conv1a): Sequential(
      (0): Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv1aa): Sequential(
      (0): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv1b): Sequential(
      (0): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv2a): Sequential(
      (0): Conv2d(16, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv2aa): Sequential(
      (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv2b): Sequential(
      (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv3a): Sequential(
      (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv3aa): Sequential(
      (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv3b): Sequential(
      (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv4a): Sequential(
      (0): Conv2d(64, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv4aa): Sequential(
      (0): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv4b): Sequential(
      (0): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv5a): Sequential(
      (0): Conv2d(96, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv5aa): Sequential(
      (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv5b): Sequential(
      (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv6aa): Sequential(
      (0): Conv2d(128, 196, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv6a): Sequential(
      (0): Conv2d(196, 196, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv6b): Sequential(
      (0): Conv2d(196, 196, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (corr): Correlation()
    (leakyRELU): LeakyReLU(negative_slope=0.1)
    (conv6_0): Sequential(
      (0): Conv2d(81, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv6_1): Sequential(
      (0): Conv2d(209, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv6_2): Sequential(
      (0): Conv2d(337, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv6_3): Sequential(
      (0): Conv2d(433, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv6_4): Sequential(
      (0): Conv2d(497, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (predict_flow6): Conv2d(529, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (deconv6): ConvTranspose2d(2, 2, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (upfeat6): ConvTranspose2d(529, 2, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (conv5_0): Sequential(
      (0): Conv2d(213, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv5_1): Sequential(
      (0): Conv2d(341, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv5_2): Sequential(
      (0): Conv2d(469, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv5_3): Sequential(
      (0): Conv2d(565, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv5_4): Sequential(
      (0): Conv2d(629, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (predict_flow5): Conv2d(661, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (deconv5): ConvTranspose2d(2, 2, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (upfeat5): ConvTranspose2d(661, 2, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (conv4_0): Sequential(
      (0): Conv2d(181, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv4_1): Sequential(
      (0): Conv2d(309, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv4_2): Sequential(
      (0): Conv2d(437, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv4_3): Sequential(
      (0): Conv2d(533, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv4_4): Sequential(
      (0): Conv2d(597, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (predict_flow4): Conv2d(629, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (deconv4): ConvTranspose2d(2, 2, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (upfeat4): ConvTranspose2d(629, 2, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (conv3_0): Sequential(
      (0): Conv2d(149, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv3_1): Sequential(
      (0): Conv2d(277, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv3_2): Sequential(
      (0): Conv2d(405, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv3_3): Sequential(
      (0): Conv2d(501, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv3_4): Sequential(
      (0): Conv2d(565, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (predict_flow3): Conv2d(597, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (deconv3): ConvTranspose2d(2, 2, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (upfeat3): ConvTranspose2d(597, 2, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (conv2_0): Sequential(
      (0): Conv2d(117, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv2_1): Sequential(
      (0): Conv2d(245, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv2_2): Sequential(
      (0): Conv2d(373, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv2_3): Sequential(
      (0): Conv2d(469, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (conv2_4): Sequential(
      (0): Conv2d(533, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (predict_flow2): Conv2d(565, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (deconv2): ConvTranspose2d(2, 2, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (dc_conv1): Sequential(
      (0): Conv2d(565, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (dc_conv2): Sequential(
      (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (dc_conv3): Sequential(
      (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (dc_conv4): Sequential(
      (0): Conv2d(128, 96, kernel_size=(3, 3), stride=(1, 1), padding=(8, 8), dilation=(8, 8))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (dc_conv5): Sequential(
      (0): Conv2d(96, 64, kernel_size=(3, 3), stride=(1, 1), padding=(16, 16), dilation=(16, 16))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (dc_conv6): Sequential(
      (0): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.1)
    )
    (dc_conv7): Conv2d(32, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  )
  (depthNet): Sequential(
    (0): Conv2d(3, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): Sequential(
      (0): LambdaMap(
        (0): Sequential(
          (0): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
          (1): LambdaReduce(
            (0): Sequential(
              (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
            )
            (1): Sequential(
              (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
            (2): Sequential(
              (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(32, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
              (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
            (3): Sequential(
              (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(32, 32, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
              (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
          )
          (2): LambdaReduce(
            (0): Sequential(
              (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
            )
            (1): Sequential(
              (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
            (2): Sequential(
              (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(32, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
              (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
            (3): Sequential(
              (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(32, 32, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
              (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
          )
          (3): Sequential(
            (0): LambdaMap(
              (0): Sequential(
                (0): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
                (1): LambdaReduce(
                  (0): Sequential(
                    (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                  )
                  (1): Sequential(
                    (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                    (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                  (2): Sequential(
                    (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
                    (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                  (3): Sequential(
                    (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 32, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                    (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                )
                (2): LambdaReduce(
                  (0): Sequential(
                    (0): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                  )
                  (1): Sequential(
                    (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                    (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                  (2): Sequential(
                    (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
                    (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                  (3): Sequential(
                    (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                    (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                )
                (3): Sequential(
                  (0): LambdaMap(
                    (0): Sequential(
                      (0): LambdaReduce(
                        (0): Sequential(
                          (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                        )
                        (1): Sequential(
                          (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                        (2): Sequential(
                          (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                        (3): Sequential(
                          (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(32, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                      )
                      (1): LambdaReduce(
                        (0): Sequential(
                          (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                        )
                        (1): Sequential(
                          (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                        (2): Sequential(
                          (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(64, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                        (3): Sequential(
                          (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(64, 64, kernel_size=(11, 11), stride=(1, 1), padding=(5, 5))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                      )
                    )
                    (1): Sequential(
                      (0): AvgPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0)
                      (1): LambdaReduce(
                        (0): Sequential(
                          (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                        )
                        (1): Sequential(
                          (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                        (2): Sequential(
                          (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                        (3): Sequential(
                          (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(32, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                      )
                      (2): LambdaReduce(
                        (0): Sequential(
                          (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                        )
                        (1): Sequential(
                          (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                        (2): Sequential(
                          (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                        (3): Sequential(
                          (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(32, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                      )
                      (3): Sequential(
                        (0): LambdaMap(
                          (0): Sequential(
                            (0): LambdaReduce(
                              (0): Sequential(
                                (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                              )
                              (1): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                              (2): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                              (3): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                            )
                            (1): LambdaReduce(
                              (0): Sequential(
                                (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                              )
                              (1): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                              (2): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                              (3): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                            )
                          )
                          (1): Sequential(
                            (0): AvgPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0)
                            (1): LambdaReduce(
                              (0): Sequential(
                                (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                              )
                              (1): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                              (2): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                              (3): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                            )
                            (2): LambdaReduce(
                              (0): Sequential(
                                (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                              )
                              (1): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                              (2): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                              (3): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                            )
                            (3): LambdaReduce(
                              (0): Sequential(
                                (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                              )
                              (1): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                              (2): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                              (3): Sequential(
                                (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                                (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (2): ReLU()
                                (3): Conv2d(32, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                                (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                                (5): ReLU()
                              )
                            )
                            (4): UpsamplingNearest2d(scale_factor=2.0, mode=nearest)
                          )
                        )
                        (1): LambdaReduce()
                      )
                      (4): LambdaReduce(
                        (0): Sequential(
                          (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                        )
                        (1): Sequential(
                          (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                        (2): Sequential(
                          (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                        (3): Sequential(
                          (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(32, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                      )
                      (5): LambdaReduce(
                        (0): Sequential(
                          (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                        )
                        (1): Sequential(
                          (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                        (2): Sequential(
                          (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(64, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                        (3): Sequential(
                          (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (2): ReLU()
                          (3): Conv2d(64, 64, kernel_size=(11, 11), stride=(1, 1), padding=(5, 5))
                          (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                          (5): ReLU()
                        )
                      )
                      (6): UpsamplingNearest2d(scale_factor=2.0, mode=nearest)
                    )
                  )
                  (1): LambdaReduce()
                )
                (4): LambdaReduce(
                  (0): Sequential(
                    (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                  )
                  (1): Sequential(
                    (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                    (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                  (2): Sequential(
                    (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
                    (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                  (3): Sequential(
                    (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                    (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                )
                (5): LambdaReduce(
                  (0): Sequential(
                    (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                  )
                  (1): Sequential(
                    (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                    (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                  (2): Sequential(
                    (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
                    (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                  (3): Sequential(
                    (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 32, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                    (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                )
                (6): UpsamplingNearest2d(scale_factor=2.0, mode=nearest)
              )
              (1): Sequential(
                (0): LambdaReduce(
                  (0): Sequential(
                    (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                  )
                  (1): Sequential(
                    (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                    (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                  (2): Sequential(
                    (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
                    (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                  (3): Sequential(
                    (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(32, 32, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                    (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                )
                (1): LambdaReduce(
                  (0): Sequential(
                    (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                  )
                  (1): Sequential(
                    (0): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                    (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                  (2): Sequential(
                    (0): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(64, 32, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
                    (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                  (3): Sequential(
                    (0): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
                    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (2): ReLU()
                    (3): Conv2d(64, 32, kernel_size=(11, 11), stride=(1, 1), padding=(5, 5))
                    (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
                    (5): ReLU()
                  )
                )
              )
            )
            (1): LambdaReduce()
          )
          (4): LambdaReduce(
            (0): Sequential(
              (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
            )
            (1): Sequential(
              (0): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
            (2): Sequential(
              (0): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(64, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
              (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
            (3): Sequential(
              (0): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(64, 32, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
              (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
          )
          (5): LambdaReduce(
            (0): Sequential(
              (0): Conv2d(128, 16, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
            )
            (1): Sequential(
              (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(32, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
            (2): Sequential(
              (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(32, 16, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
              (4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
            (3): Sequential(
              (0): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(32, 16, kernel_size=(11, 11), stride=(1, 1), padding=(5, 5))
              (4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
          )
          (6): UpsamplingNearest2d(scale_factor=2.0, mode=nearest)
        )
        (1): Sequential(
          (0): LambdaReduce(
            (0): Sequential(
              (0): Conv2d(128, 16, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
            )
            (1): Sequential(
              (0): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
            (2): Sequential(
              (0): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(64, 16, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
              (4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
            (3): Sequential(
              (0): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
              (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (2): ReLU()
              (3): Conv2d(64, 16, kernel_size=(11, 11), stride=(1, 1), padding=(5, 5))
              (4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
              (5): ReLU()
            )
          )
        )
      )
      (1): LambdaReduce()
    )
    (4): Conv2d(64, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  )
))
pip install scikit-image -i http://jfrog.cloud.qiyi.domain/api/pypi/pypi/simple
Looking in indexes: http://jfrog.cloud.qiyi.domain/api/pypi/pypi/simple
Side-by-side (hstack) comparison of the 120fps outputs:

for file in `ls /data/zhuyinghao/zoomai/25_to_120/`;do ffmpeg -i /data/zhuyinghao/zoomai/zoomai_120fps/${file%%_*}_120fps.mp4 -i /data/zhuyinghao/zoomai/25_to_120/$file -filter_complex hstack=inputs=2 -vcodec libx264 -x264opts qp=12:bframes=3 -color_primaries 1 -color_trc 1 -colorspace 1 -y ./compare/${file%%.*}_zoom_vs_our_120fps.mp4;done

ImageMagick convert: PNG to raw RGB

convert -depth 8 xxx.png rgb:xxx.raw 
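The .raw file this produces is headerless, interleaved 8-bit RGB, so the image dimensions must be remembered separately. A minimal Python sketch for writing and reading such a buffer with NumPy (the file name and dimensions here are made up for illustration):

```python
import numpy as np

# Hypothetical dimensions: raw RGB carries no header, so width/height
# must be known out of band (e.g. from the source PNG).
width, height = 64, 48

# Write a synthetic buffer in the same layout `convert -depth 8 x.png rgb:x.raw`
# uses: row-major, interleaved R, G, B, one byte per channel.
rgb = (np.arange(height * width * 3) % 256).astype(np.uint8)
rgb.tofile("demo.raw")

# Read it back and restore the shape.
img = np.fromfile("demo.raw", dtype=np.uint8).reshape(height, width, 3)
```

ffmpeg can consume the same layout with `-f rawvideo -pix_fmt rgb24 -s 64x48`.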
nvprof profiling output (kernel times and CUDA API calls):

==132134== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:   47.44%  9.9232ms         1  9.9232ms  9.9232ms  9.9232ms  scharr3x3(uchar3*, unsigned char*, short, short, short, short, unsigned int)
                   33.48%  7.0028ms         2  3.5014ms  3.4969ms  3.5059ms  morph(unsigned char*, unsigned char*, int, int, int, int, int, bool)
                    9.56%  2.0005ms         1  2.0005ms  2.0005ms  2.0005ms  block_label(unsigned int*, unsigned char*)
                    3.66%  766.24us         8  95.779us     768ns  760.41us  [CUDA memcpy HtoD]
                    1.48%  308.75us         1  308.75us  308.75us  308.75us  resolve_labels(unsigned int*)
                    1.27%  266.25us         1  266.25us  266.25us  266.25us  y_label_reduction(unsigned int*, unsigned char*)
                    1.11%  232.33us         1  232.33us  232.33us  232.33us  hist(unsigned int*, unsigned int*, int)
                    0.79%  165.45us         1  165.45us  165.45us  165.45us  reducedot(unsigned char*, unsigned int*, unsigned int*, int)
                    0.77%  161.80us         1  161.80us  161.80us  161.80us  [CUDA memcpy DtoH]
                    0.42%  88.035us         1  88.035us  88.035us  88.035us  x_label_reduction(unsigned int*, unsigned char*)
                    0.00%     736ns         1     736ns     736ns     736ns  [CUDA memset]
      API calls:   92.35%  287.31ms         4  71.827ms  248.97us  286.55ms  cudaMalloc
                    6.76%  21.047ms         2  10.524ms  882.63us  20.165ms  cudaMemcpy
                    0.39%  1.2074ms         1  1.2074ms  1.2074ms  1.2074ms  cuDeviceTotalMem
                    0.35%  1.0967ms        96  11.424us     165ns  511.02us  cuDeviceGetAttribute
                    0.04%  121.71us         9  13.523us  7.3820us  37.743us  cudaLaunchKernel
                    0.04%  114.73us         7  16.389us  6.9070us  70.432us  cudaMemcpyToSymbol
                    0.03%  104.82us         1  104.82us  104.82us  104.82us  cuDeviceGetName
                    0.02%  71.966us         1  71.966us  71.966us  71.966us  cudaMemsetAsync
                    0.01%  31.316us         3  10.438us  2.6450us  26.019us  cudaStreamCreate
                    0.00%  9.7070us         1  9.7070us  9.7070us  9.7070us  cudaStreamSynchronize
                    0.00%  4.8730us         1  4.8730us  4.8730us  4.8730us  cuDeviceGetPCIBusId
                    0.00%  2.6550us         3     885ns     292ns  1.8670us  cuDeviceGetCount
                    0.00%  1.4150us         2     707ns     249ns  1.1660us  cuDeviceGet
                    0.00%     301ns         1     301ns     301ns     301ns  cuDeviceGetUuid
curl -X PUT -T test_dain_small.py -H "X-Auth-Token: dad42a3b66aa40c5962d35540d9ab052" https://fft.qiyi.domain/api/file/4k_3d_60fps/test_dain_small.py
for file in `ls ./`;do echo $file;curl -X PUT -T $file -H "X-Auth-Token: dad42a3b66aa40c5962d35540d9ab052" http://fft.qiyi.domain/api/file/4k_3d_fruc/24fps/$file;done
jarvis create-runonce-job "ubuntu" --volume-name "zhuyinghao-zhuyinghao" --volume-token "392a0dac0ea14a71b073242870438614" --cpu-quota 12 --gpu-quota 1 --mems-in-mb 32000 --cluster-name "runonce-online03-gpu-training" --group "yinghao"
i=0;cat url | while read line;do ffmpeg -i "$line" -ss 00:05:00 -t 120 -acodec copy -vcodec copy -y ./3d/3d_$(printf %04d $i)_clip_2min.mkv;i=$((i+1));done

Convert the images in a folder into a video

ffmpeg -loop 1 -f image2 -i ./%04d.png -vcodec libx264 -x264opts qp=12:bframes=3 -r 1 -frames 14 test_sdyjq.mp4
Encode 16-bit gbr rawvideo to HDR10 with x265:

ffmpeg -pix_fmt gbrp16le -s 512x288 -framerate 25.0 -f rawvideo -i /data/dataset/gbr16_video.rgb -c:v libx265 -x265-params colorprim='bt2020':transfer='smpte-st-2084':colormatrix='bt2020nc' -preset slow -pix_fmt yuv420p10le /data/dataset/hdr10_test_video.mkv
ffmpeg -pix_fmt gbrp16le -s 1920x1080 -framerate 25.0 -f rawvideo -i src-tenggongxueyuan-sdr-4-014633-30s.gbr -c:v libx265 -preset veryslow -x265-params lossless -pix_fmt yuv420p10le src-tenggongxueyuan-sdr-4-014633-30s.mp4
ffmpeg -pix_fmt gbrp16le -s 1920x1080 -framerate 25.0 -f rawvideo -i src-tenggongxueyuan-sdr-1-001542-30s_enhanced_pq2100_bt2020_16bit.gbr -c:v libx265 -x265-params "hrd=1:aud=1:no-info=1:sar='1:1':colorprim='bt2020':transfer='smpte2084':colormatrix='bt2020nc':master-display='G(8500,39850)B(6500,2300)R(35400,14600)WP(15635,16450)L(0,0)':max-cll='0,0':no-open-gop=1" -b:v 10000k -preset slow -pix_fmt yuv420p10le src-tenggongxueyuan-sdr-1-001542-30s_enhanced_pq2100_bt2020_10M_10bit.mp4
ffmpeg -i 流浪地球.mkv -vf crop=512:512:512:512 -threads 5 -preset ultrafast -strict -2  outputname.mp4
CARAFE: Content-Aware ReAssembly of FEatures
ffmpeg -i shuangzishashou_all.mp4 -vf "movie=logo.png[logo];[in][logo]overlay=x='if(gte(t,2),-w+(mod(t,40)-2)*40,NAN)':y=(main_h-overlay_h)/2 [out]" -t 50 output.mp4
ffmpeg -i ./zongyi_zoomai/vlog营业中11200919800_SR_20200423.mkv -i edvr_M_tsa/vlog营业中11200919800_restore_by_223300_G_l1_and_perceptrul_qrestore.mp4 -filter_complex hstack=inputs=2 -t 10 -acodec copy -vcodec libx264 -x264opts qp=12:bframes=3  compare-vlog.mp4

rgb2gray (luma-weighted channel sum; roi_crop is an existing RGB array)

import numpy as np
rgb_weights = [0.3, 0.59, 0.11]
roi_crop_gray = np.dot(roi_crop[..., :3], rgb_weights)
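A self-contained sketch of the same luma-weighted conversion (the tiny input array is made up for illustration):

```python
import numpy as np

rgb_weights = [0.3, 0.59, 0.11]  # classic luma weights for R, G, B

# A 1x2 RGB image: one pure-red pixel and one pure-white pixel.
rgb = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)

# Weighted sum over the channel axis gives the grayscale image.
gray = np.dot(rgb[..., :3].astype(np.float64), rgb_weights)
# red -> 255 * 0.3 = 76.5, white -> 255 * (0.3 + 0.59 + 0.11) = 255.0
```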
nvcc -w -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_72,code=sm_72 -gencode=arch=compute_75,code=sm_75 -lib kernel.cu -o libfruckernel.a
g++ main.o -o main  -L/usr/local/cuda/lib64 -L/data/zhuyinghao/TensorRT-7.0.0.11/lib -lmyelin -lcublas -lnvrtc -lnvinfer -lcudart -lnvparsers -lnvonnxparser -L. -lfruckernel