Pinned Repositories
CEM
EMNLP'22, CEM improves machine-human chatting handoff (MHCH) performance by correcting prediction bias and training an auxiliary cost simulator based on a causal graph of user state and labor cost, without requiring complex model crafting.
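A minimal sketch of the general idea of a handoff classifier with an auxiliary labor-cost head; the module names, dimensions, and joint loss below are illustrative assumptions, not the actual CEM architecture or causal adjustment from the repo.

```python
# Illustrative sketch only: a per-turn handoff classifier with an auxiliary
# cost-simulator head; CEM's real causal debiasing is defined in the repo.
import torch
import torch.nn as nn

class HandoffWithCostSimulator(nn.Module):
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.encoder = nn.GRU(input_size=hidden_dim, hidden_size=hidden_dim, batch_first=True)
        self.handoff_head = nn.Linear(hidden_dim, 2)   # transfer to human vs. keep bot
        self.cost_head = nn.Linear(hidden_dim, 1)      # auxiliary labor-cost simulator

    def forward(self, utterance_feats):
        # utterance_feats: (batch, turns, hidden_dim) dialogue-turn representations
        states, _ = self.encoder(utterance_feats)
        handoff_logits = self.handoff_head(states)           # per-turn handoff decision
        simulated_cost = self.cost_head(states).squeeze(-1)  # per-turn cost estimate
        return handoff_logits, simulated_cost

model = HandoffWithCostSimulator()
feats = torch.randn(4, 10, 256)                    # 4 dialogues, 10 turns each
logits, cost = model(feats)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 2), torch.randint(0, 2, (40,))
) + nn.functional.mse_loss(cost, torch.rand(4, 10))  # joint handoff + cost objective
```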
LSAS
ICME'23, the lightweight sub-attention strategy (LSAS) uses high-order sub-attention modules to improve the original self-attention modules.
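A minimal sketch of what "high-order sub-attention" could look like, assuming a cascaded squeeze-and-excitation-style block as the lightweight attention; this is a stand-in illustration, not the exact LSAS design.

```python
# Illustrative sketch, not the exact LSAS module: a lightweight channel attention
# applied several times in cascade as a stand-in for "high-order sub-attention".
import torch
import torch.nn as nn

class LightweightSubAttention(nn.Module):
    def __init__(self, channels, reduction=16, order=2):
        super().__init__()
        self.order = order
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-apply the same lightweight attention `order` times so later passes
        # attend over already re-weighted features (a higher-order refinement).
        for _ in range(self.order):
            w = self.fc(self.pool(x).flatten(1)).unsqueeze(-1).unsqueeze(-1)
            x = x * w
        return x

x = torch.randn(2, 64, 32, 32)
print(LightweightSubAttention(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```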
Mirror-Gradient
WWW'24, Mirror Gradient (MG) lets multimodal recommendation models reach flat local minima more easily than standard training.
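For intuition about flat-minima-seeking training, here is a generic sharpness-aware (SAM-style) ascent-then-descent step; this is an explicitly swapped-in illustration and not MG's actual update rule, which is defined in the repo.

```python
# Illustrative only: a generic SAM-style step toward flatter minima, NOT the MG rule.
import torch

def sam_style_step(model, loss_fn, batch, optimizer, rho=0.05):
    """One ascent-then-descent step that favors flat regions of the loss surface."""
    # 1) gradients at the current weights
    loss_fn(model, batch).backward()
    with torch.no_grad():
        grad_norm = torch.norm(torch.stack(
            [p.grad.norm() for p in model.parameters() if p.grad is not None]))
        eps = []
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)  # 2) ascend to a nearby worst case
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()
    # 3) gradients at the perturbed point, undo the perturbation, then descend
    loss_fn(model, batch).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()

# toy usage
model = torch.nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
batch = (torch.randn(16, 8), torch.randn(16, 1))
sam_style_step(model, lambda m, b: torch.nn.functional.mse_loss(m(b[0]), b[1]), batch, opt)
```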
SEM
SEM automatically selects and integrates attention operators to compute attention maps.
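A minimal sketch of one way to "select and integrate" attention operators: several candidate pooling operators each propose a channel-attention map, and learned softmax weights mix them. The operators and mixing scheme here are assumptions, not the repo's actual mechanism.

```python
# Illustrative sketch: softmax-weighted mixture of candidate attention operators.
import torch
import torch.nn as nn

class AttentionOperatorMixture(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.ops = nn.ModuleList([nn.AdaptiveAvgPool2d(1), nn.AdaptiveMaxPool2d(1)])
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
        self.mix = nn.Parameter(torch.zeros(len(self.ops)))  # learned operator weights

    def forward(self, x):
        weights = torch.softmax(self.mix, dim=0)
        # Each operator proposes a channel-attention map; integrate by weighted sum.
        maps = [self.fc(op(x).flatten(1)) for op in self.ops]
        attn = sum(w * m for w, m in zip(weights, maps))
        return x * attn.unsqueeze(-1).unsqueeze(-1)

x = torch.randn(2, 64, 32, 32)
print(AttentionOperatorMixture(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```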
SPEM
MMM'23, SPEM adopts a self-adaptive pooling strategy based on global max-pooling and global min-pooling, with a lightweight module that produces the attention map.
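A minimal sketch of the described ingredients: global max- and min-pooling fused by a learnable ratio, followed by a small module that outputs a channel attention map. The exact SPEM formulation is defined in the repo; the mixing parameter and MLP below are assumptions.

```python
# Illustrative sketch: max/min global pooling + lightweight attention module.
import torch
import torch.nn as nn

class SelfAdaptivePoolingAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))  # self-adaptive max/min mixing weight
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        # Global max- and min-pooling over the spatial dimensions.
        flat = x.flatten(2)                      # (B, C, H*W)
        g_max = flat.max(dim=2).values
        g_min = flat.min(dim=2).values
        pooled = self.alpha * g_max + (1 - self.alpha) * g_min
        attn = self.mlp(pooled)                  # lightweight module -> attention map
        return x * attn.unsqueeze(-1).unsqueeze(-1)

x = torch.randn(2, 64, 32, 32)
print(SelfAdaptivePoolingAttention(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```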
SUR-adapter
ACM MM'23 (oral), SUR-adapter equips pre-trained diffusion models with the semantic understanding and reasoning capabilities of large language models, building high-quality textual semantic representations for text-to-image generation.
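A minimal sketch of the general adapter idea: a small residual module refines the text-encoder features fed to the diffusion model, with an alignment loss toward LLM representations. All names, dimensions, and the alignment objective below are assumptions, not the repo's actual interfaces.

```python
# Illustrative sketch only: residual adapter over text features + LLM alignment loss.
import torch
import torch.nn as nn

class SemanticAdapter(nn.Module):
    def __init__(self, clip_dim=768, llm_dim=4096):
        super().__init__()
        self.adapter = nn.Sequential(
            nn.Linear(clip_dim, clip_dim), nn.GELU(), nn.Linear(clip_dim, clip_dim))
        self.to_llm = nn.Linear(clip_dim, llm_dim)  # projection used only for alignment

    def forward(self, text_feats):
        # Residual refinement of the prompt embeddings fed to the diffusion UNet.
        return text_feats + self.adapter(text_feats)

adapter = SemanticAdapter()
clip_feats = torch.randn(2, 77, 768)    # toy CLIP-like token embeddings
llm_feats = torch.randn(2, 77, 4096)    # toy LLM token representations
refined = adapter(clip_feats)
align_loss = nn.functional.mse_loss(adapter.to_llm(refined), llm_feats)
```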