Deep Matching: A Review of Recall Strategies
Introduction
This post reviews recall (candidate retrieval) strategies for recommender systems. Please credit the source when reposting.
Reference: WWW'18 tutorial
Methods of Representation Learning
DSSM
Problem: word-order information in the input sequence is lost; a bag-of-words model cannot preserve order.
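To make the order-loss problem concrete, here is a minimal two-tower sketch in the spirit of DSSM (the toy vocabulary, dimensions, and random weights are illustrative assumptions; the original model uses letter-trigram hashing and trains with a softmax over clicked documents). Because the input is a bag of words, permuting the words leaves the tower output, and hence the cosine score, unchanged.

```python
import numpy as np

def mlp_tower(x, weights):
    """One DSSM-style tower: stacked tanh layers mapping a
    bag-of-words vector to a dense semantic vector."""
    h = x
    for W in weights:
        h = np.tanh(h @ W)
    return h

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy setup: vocabulary of 8 terms, one shared tower (hypothetical sizes).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 16)) * 0.1,
           rng.standard_normal((16, 4)) * 0.1]

query = np.zeros(8); query[[1, 3]] = 1.0  # bag-of-words: order discarded
doc   = np.zeros(8); doc[[3, 1]] = 1.0    # same words, different order

# Identical input vectors -> identical representations -> score 1.0.
score = cosine(mlp_tower(query, weights), mlp_tower(doc, weights))
print(round(score, 4))  # 1.0: reordering the words changes nothing
```

This is exactly the weakness CDSSM and LSTM-DSSM address by replacing the bag-of-words input with convolutional or recurrent encoders.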
CDSSM,LSTM-DSSM
Both CNNs and RNNs preserve some order information: CNNs capture short-range order, while RNNs can retain long-range dependencies.
CNTN
Extensions to Representation Learning Methods
Problem: earlier representation-learning approaches are too coarse-grained for text matching.
Solution: add fine-grained signals, as in MultiGranCNN, U-RAE, MV-LSTM, and similar models.
MV-LSTM
U-RAE
Comparison
Methods of Matching Function Learning
ARC-II
Problem: word-level exact-matching signals are lost, because the 2-D matching matrix is constructed from the embeddings of the words in two N-grams rather than from the words themselves.
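A sketch of the grid that ARC-II's first layer operates on (the function name, toy sizes, and random embeddings are my own; the real model applies learned 1-D convolutions before building the grid): entry (i, j) concatenates the embeddings of an N-gram from each sentence, and 2-D convolutions then run over this grid.

```python
import numpy as np

def ngram_interaction_grid(emb_x, emb_y, n=2):
    """Build a 2-D grid of concatenated n-gram embeddings, the kind of
    structure ARC-II's 2-D convolution layers consume. Entry (i, j)
    pairs the n-gram starting at position i in x with the n-gram
    starting at position j in y."""
    Lx, d = emb_x.shape
    Ly, _ = emb_y.shape
    grid = np.zeros((Lx - n + 1, Ly - n + 1, 2 * n * d))
    for i in range(Lx - n + 1):
        for j in range(Ly - n + 1):
            grid[i, j] = np.concatenate(
                [emb_x[i:i + n].ravel(), emb_y[j:j + n].ravel()])
    return grid

rng = np.random.default_rng(1)
ex = rng.standard_normal((5, 3))  # sentence x: 5 words, dim 3
ey = rng.standard_normal((4, 3))  # sentence y: 4 words, dim 3
grid = ngram_interaction_grid(ex, ey)
print(grid.shape)  # (4, 3, 12)
```

Because each cell holds embeddings rather than exact word identities, a literal term overlap between the two sentences is not directly visible to the network, which is the problem noted above.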
Matching Pyramid
Inspired by image recognition
Basic matching signals: word-level matching matrix
Matching function: 2-D convolution + MLP
Positional information of words is kept
Match-SRNN
Recursive matching structure
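A heavily simplified scalar version of the recursive matching structure (an assumption-laden sketch: the real Match-SRNN runs a spatial GRU over vector-valued states, while this keeps only the recursion pattern with a fixed scalar weight):

```python
import numpy as np

def recursive_match(S, w=0.25):
    """Accumulate a match state over the interaction matrix S: the
    state at (i, j) combines the left, top, and diagonal states with
    the local interaction S[i, j], mimicking Match-SRNN's recursive
    structure. The final state scores the whole sentence pair."""
    n, m = S.shape
    h = np.zeros((n + 1, m + 1))
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            h[i, j] = np.tanh(
                w * (h[i - 1, j] + h[i, j - 1] + h[i - 1, j - 1])
                + S[i - 1, j - 1])
    return h[n, m]

S = np.eye(3)  # toy interaction: words match only along the diagonal
score = recursive_match(S)
print(round(score, 4))
```

The recursion resembles the dynamic program for longest common subsequence, which is the intuition the paper builds on.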
K-NRM
kernel pooling as matching function
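A compact sketch of kernel pooling (kernel centres, widths, and the toy similarity matrix are illustrative choices; in K-NRM the resulting K-dimensional feature feeds a small learned ranking layer):

```python
import numpy as np

def kernel_pooling(M, mus, sigma=0.1):
    """K-NRM's matching function: apply K Gaussian (RBF) kernels to
    each entry of the word-level similarity matrix M, sum over the
    document axis to get soft-TF counts, take log, then sum over
    query terms."""
    # M: (query_len, doc_len) cosine similarities in [-1, 1]
    K = np.exp(-((M[:, :, None] - mus[None, None, :]) ** 2)
               / (2 * sigma ** 2))          # (q, d, K)
    soft_tf = K.sum(axis=1)                 # per-query soft match counts
    return np.log(np.clip(soft_tf, 1e-10, None)).sum(axis=0)

mus = np.linspace(-0.9, 1.0, 5)            # kernel centres over [-1, 1]
M = np.array([[1.0, 0.3],
              [0.2, 0.9]])                 # toy similarity matrix
phi = kernel_pooling(M, mus)
print(phi.shape)  # (5,)
```

Conv-KNRM keeps the same pooling but builds M from n-gram (convolutional) representations instead of single words.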
Conv-KNRM
Decomposable Attention Model for Matching
Some modern recommendation system architectures
Matrix Factorization as a Neural Network
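The title says it all, and a few lines make it explicit (toy table sizes and random values are my own): one-hot user and item inputs pass through embedding lookups, and the output "layer" is a fixed inner product, so the network reproduces classic MF exactly.

```python
import numpy as np

def mf_forward(user_id, item_id, P, Q):
    """Matrix factorization as a neural net: embedding lookups for the
    user and item, followed by an inner-product output layer."""
    return float(P[user_id] @ Q[item_id])

rng = np.random.default_rng(3)
P = rng.standard_normal((4, 2))  # user embedding table (4 users, dim 2)
Q = rng.standard_normal((5, 2))  # item embedding table (5 items, dim 2)

# Equivalent to the full rating-matrix reconstruction P @ Q.T:
R_hat = P @ Q.T
same = np.isclose(mf_forward(1, 2, P, Q), R_hat[1, 2])
print(same)  # True
```

NeuMF and the other NCF variants below replace the fixed inner product with learned interaction functions.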
Deep Matrix Factorization
AutoRec
Deep Collaborative Filtering via Marginalized DAE
Neural Collaborative Filtering
NeuMF
NNCF
TransRec
Latent Relation Metric Learning
Wide & Deep
Too well known to need discussion here.
Deep Crossing
NFM
Too well known to need discussion here.
AFM
Too well known to need discussion here.
Tree-Based Models
GB-CENT
Deep Embedding Forest
Tree-enhanced Embedding Model
Short Summary
Personally, I think all of these algorithms really do just two things: learn representations and learn matching functions. The road ahead is long; the best way forward is to read and implement more code.