Visualizing a CNN with PyTorch hooks
With PyTorch hooks we can conveniently read (or modify) the values and gradients of intermediate layers without changing the network structure between input and output. This post records how to use register_forward_hook(hook) to capture the output feature maps of an arbitrary layer of a pre-trained model and save them to disk.
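As a minimal sketch of that idea (the toy model and the dictionary keys here are made up for illustration), a forward hook reads a layer's output during forward(), and a backward hook reads its gradients during backward(), all without touching the model definition:

import torch
import torch.nn as nn

# toy model, purely for illustration
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
feats, grads = {}, {}

def save_output(module, inputs, output):         # forward hook signature
    feats['fc1'] = output.detach()

def save_grad(module, grad_input, grad_output):  # backward hook signature
    grads['fc1'] = grad_output[0].detach()

h1 = model[0].register_forward_hook(save_output)
h2 = model[0].register_full_backward_hook(save_grad)

x = torch.randn(1, 4)
model(x).sum().backward()
print(feats['fc1'].shape, grads['fc1'].shape)  # both torch.Size([1, 8])

h1.remove()   # unregister the hooks via the returned handles
h2.remove()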
What register_forward_hook(hook) does
From the PyTorch documentation: Registers a forward hook on the module. The hook will be called every time after forward() has computed an output. It should have the following signature:
hook(module, input, output) -> None
The hook should not modify the input or output. Returns a handle that can be used to remove the added hook by calling handle.remove().
Return type: torch.utils.hooks.RemovableHandle
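A quick sketch of those semantics (the layer and tensors are arbitrary, used only to show the behaviour): the hook fires after every forward pass until the returned handle removes it.

import torch
import torch.nn as nn

layer = nn.Linear(3, 3)
calls = []

handle = layer.register_forward_hook(
    lambda module, inp, out: calls.append(out.shape))  # returns None, so the output is untouched

layer(torch.randn(1, 3))
layer(torch.randn(1, 3))
print(len(calls))     # 2 -- the hook ran after each forward()

handle.remove()       # the RemovableHandle unregisters the hook
layer(torch.randn(1, 3))
print(len(calls))     # still 2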
Code
import torch
from torchvision import transforms
import torchvision.models as models
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np

path = 'path/to/your/image'
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
activation = {}

# ToTensor already scales pixel values to [0, 1]; normalize with the
# ImageNet statistics expected by the pre-trained VGG weights.
loader = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Build a hook that stores a layer's output (activation) under `name`
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook

# Load and preprocess the image
def image_loader(image_name):
    image = Image.open(image_name).convert('RGB').resize((224, 224))
    image = loader(image).unsqueeze(0)  # add the batch dimension
    return image.to(device)

which_layer_to_visualize = 6  # index of the layer to visualize

tensor = image_loader(path)
v = models.vgg19(pretrained=True).to(device).eval()
vgg_pretrained_features = v.features

# Register the forward hook on the chosen layer
handle = vgg_pretrained_features[which_layer_to_visualize].register_forward_hook(
    get_activation(str(which_layer_to_visualize)))

with torch.no_grad():
    out = vgg_pretrained_features(tensor)
handle.remove()

# Save each channel of the hooked activation as one image.
# Raw activations are not bounded to [0, 1], so rescale every feature map
# before mapping it to a colormap (casting act*255 to uint8 would overflow).
act = activation[str(which_layer_to_visualize)].squeeze().cpu().numpy()
for i in range(act.shape[0]):
    fmap = act[i]
    fmap = (fmap - fmap.min()) / (fmap.max() - fmap.min() + 1e-8)
    plt.imsave('./vgg_19_layer_{}_{}.png'.format(which_layer_to_visualize, i),
               fmap, cmap=plt.cm.jet)
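To decide what to put in which_layer_to_visualize, it helps to print the indices of the sub-modules inside vgg19().features; a quick sketch:

import torchvision.models as models

features = models.vgg19(pretrained=True).features
for idx, layer in enumerate(features):
    print(idx, layer)

For the stock torchvision VGG19, index 6 is the ReLU that follows the first 128-channel convolution, so the script above saves 128 feature maps of size 112x112.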
References:
https://www.cnblogs.com/hellcat/p/8512090.html
https://zhuanlan.zhihu.com/p/75054200
https://discuss.pytorch.org/t/visualize-feature-map/29597/7