Pinned Repositories
LVLM-Safety
[FCS'24] Paper on the safety of large vision-language models (LVLMs)
Multimode-fusion-method-under-multi-task
A novel multimodal fusion approach for multi-task learning, termed Attentive Tensor Alignment
ProLLM
[COLM'24] We propose Protein Chain of Thought (ProCoT), which replicates the biological mechanism of signaling pathways as language prompts. It considers a signaling pathway as a protein reasoning process, which starts from upstream proteins and passes through several intermediate proteins to transmit biological signals to downstream proteins.
SEAttnGAN
Mingyu Jin's final-year project
Simulating-Alien-Civilizations-with-LLM-based-Agents
LLM-based agents that simulate alien civilizations
Stockagent
Large Language Model-based Stock Trading in Simulated Real-world Environments
The-Impact-of-Reasoning-Step-Length-on-Large-Language-Models
[ACL'24] Chain of Thought (CoT) is significant in improving the reasoning abilities of large language models (LLMs). However, the correlation between the effectiveness of CoT and the length of reasoning steps in prompts remains largely unknown. To shed light on this, we conducted several empirical experiments to explore this relationship.
VAEGAN_for_GZSL_based_on_Mahalanobis_distance
A VAEGAN for generalized zero-shot learning (GZSL) based on Mahalanobis distance
We_media_generation
Intelligent portrait cropping based on heat maps; keyword extraction based on part-of-speech tagging; large-model prompt engineering based on BERT fine-tuning.
uncertainty_attack
MingyuJ666's Repositories
MingyuJ666/MingyuJ666.github.io
https://mingyuj666.github.io/