Yinheng Li

A Practical Survey on Zero-shot Prompt Design for In-context Learning

Sep 22, 2023
Yinheng Li

The remarkable advancements in large language models (LLMs) have brought about significant improvements in Natural Language Processing (NLP) tasks. This paper presents a comprehensive review of in-context learning techniques, focusing on different types of prompts, including discrete, continuous, few-shot, and zero-shot, and their impact on LLM performance. We explore various approaches to prompt design, such as manual design, optimization algorithms, and evaluation methods, to optimize LLM performance across diverse tasks. Our review covers key research studies in prompt engineering, discussing their methodologies and contributions to the field. We also delve into the challenges of evaluating prompt performance, given the absence of a single "best" prompt and the importance of considering multiple metrics. In conclusion, the paper highlights the critical role of prompt design in harnessing the full potential of LLMs and provides insights into combining manual design, optimization techniques, and rigorous evaluation for more effective and efficient use of LLMs in various NLP tasks.

* RANLP 2023: 14th Conference on Recent Advances in NLP, pp. 637-643, Varna, Bulgaria
* Published in the International Conference on Recent Advances in Natural Language Processing (RANLP 2023): https://ranlp.org/ranlp2023/index.php/accepted-papers/
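As a rough illustration of the prompt types the survey covers, the sketch below constructs a zero-shot and a few-shot prompt for a sentiment-classification task. The task, labels, and example texts are hypothetical stand-ins, and no specific LLM provider is assumed; this is not the paper's own setup.

```python
# Minimal sketch of zero-shot vs. few-shot (in-context) prompts for a
# sentiment-classification task. The survey does not prescribe a specific
# model or API, so only the prompt strings are built here.

def zero_shot_prompt(text: str) -> str:
    # Zero-shot: only a task instruction, no labeled demonstrations.
    return (
        "Classify the sentiment of the following review as Positive or Negative.\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

def few_shot_prompt(text: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: the same instruction, preceded by labeled demonstrations.
    demos = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return (
        "Classify the sentiment of each review as Positive or Negative.\n"
        f"{demos}\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

if __name__ == "__main__":
    examples = [("Great battery life.", "Positive"),
                ("Screen broke in a week.", "Negative")]
    print(zero_shot_prompt("The camera is superb."))
    print(few_shot_prompt("The camera is superb.", examples))
```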

Utilizing Large Language Models for Natural Interface to Pharmacology Databases

Jul 26, 2023
Hong Lu, Chuan Li, Yinheng Li, Jie Zhao


The drug development process necessitates that pharmacologists undertake various tasks, such as reviewing literature, formulating hypotheses, designing experiments, and interpreting results. Each stage requires accessing and querying vast amounts of information. In this abstract, we introduce a Large Language Model (LLM)-based Natural Language Interface designed to interact with structured information stored in databases. Our experiments demonstrate the feasibility and effectiveness of the proposed framework. This framework can generalize to query a wide range of pharmaceutical data and knowledge bases.

* BIOKDD 2023 abstract track 
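To make the idea of a natural-language interface to a structured database concrete, here is a hedged sketch: an LLM translates a question into SQL over a known schema, and the query is executed against the database. The table schema, prompt wording, and `call_llm` helper are illustrative assumptions; the abstract does not specify the framework's actual components.

```python
# Hypothetical sketch of an LLM-based natural-language interface to a
# structured pharmacology database. Schema, prompt, and `call_llm` are
# assumptions for illustration only.
import sqlite3

SCHEMA = "CREATE TABLE drugs (name TEXT, target TEXT, indication TEXT, phase INTEGER);"

def question_to_sql(question: str, call_llm) -> str:
    # Ask the LLM to translate the user's question into SQL over the known schema.
    prompt = (
        f"Database schema:\n{SCHEMA}\n"
        f"Write a single SQLite query that answers: {question}\n"
        "SQL:"
    )
    return call_llm(prompt).strip()

def answer(question: str, db_path: str, call_llm) -> list[tuple]:
    # Generate the query, then run it against the database.
    sql = question_to_sql(question, call_llm)
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()

if __name__ == "__main__":
    # Stand-in "LLM" that returns a fixed query, just to show the flow end to end.
    def fake_llm(_prompt: str) -> str:
        return "SELECT name FROM drugs WHERE target = 'EGFR' AND phase >= 3;"
    # answer("Which phase-3 drugs target EGFR?", "pharma.db", fake_llm)
```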

Self-Supervised Image Representation Learning: Transcending Masking with Paired Image Overlay

Jan 23, 2023
Yinheng Li, Han Ding, Shaofei Wang


Self-supervised learning has become a popular approach in recent years because it learns meaningful representations without data annotation. This paper proposes a novel image augmentation technique, paired image overlay, which has not been widely applied in self-supervised learning. By overlaying pairs of images, the method is designed to give the model better guidance about the underlying content, resulting in more useful representations. The proposed augmentation is evaluated with contrastive learning, a widely used self-supervised method with solid performance on downstream tasks, and the results demonstrate that it improves the performance of self-supervised models.
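A minimal sketch of what a paired-image-overlay augmentation could look like follows: two images are alpha-blended into one view, which a contrastive objective would then pull toward an unblended view of the anchor. The mixing ratio and the way views feed the loss are assumptions; the paper's exact recipe may differ.

```python
# Sketch of a paired-image-overlay augmentation for contrastive learning.
# Mixing ratio and view construction are illustrative assumptions.
import numpy as np

def overlay(img_a: np.ndarray, img_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    # Both images are float arrays in [0, 1] with the same HxWxC shape.
    return alpha * img_a + (1.0 - alpha) * img_b

def make_views(anchor: np.ndarray, partner: np.ndarray, rng: np.random.Generator):
    # Two "views" of the same anchor: one overlaid with a randomly chosen
    # partner image, one left as-is. A contrastive loss (e.g. SimCLR-style
    # NT-Xent) would pull the representations of the two views together.
    alpha = rng.uniform(0.4, 0.6)
    return overlay(anchor, partner, alpha), anchor

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    anchor = rng.random((32, 32, 3))
    partner = rng.random((32, 32, 3))
    view1, view2 = make_views(anchor, partner, rng)
    print(view1.shape, view2.shape)
```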


Dynamic Portfolio Management with Reinforcement Learning

Nov 26, 2019
Junhao Wang, Yinheng Li, Yijie Cao


Dynamic portfolio management concerns the continuous redistribution of assets within a portfolio to maximize the total return over a given period of time. With recent advances in machine learning and artificial intelligence, much effort has gone into designing and discovering efficient algorithmic ways to manage portfolios. This paper presents two reinforcement learning agents: a policy-gradient actor-critic and an evolution strategy. The performance of the two agents is compared in backtesting. We also discuss the problem setup, from state-space design to the state-value function approximator and policy control design. We allow short positions to give the agent more flexibility during asset redistribution and apply a constant trading cost of 0.25%. The agent achieves a 5% return over 10 days of daily trading despite the 0.25% trading cost.
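To illustrate how short positions and a proportional 0.25% trading cost might enter the per-period reward, here is a minimal sketch of one rebalancing step. The weight convention and the turnover-based cost model are assumptions for illustration, not the paper's exact environment.

```python
# Sketch of one portfolio rebalancing step with short positions and a
# proportional trading cost, in the spirit of the setup described above.
import numpy as np

TRADING_COST = 0.0025  # 0.25% charged per unit of turnover

def step(weights_old: np.ndarray, weights_new: np.ndarray,
         asset_returns: np.ndarray) -> float:
    """Net portfolio return for one period.

    Weights may be negative (short positions); turnover is charged at
    TRADING_COST per unit of absolute weight change.
    """
    turnover = np.abs(weights_new - weights_old).sum()
    gross = float(weights_new @ asset_returns)
    return gross - TRADING_COST * turnover

if __name__ == "__main__":
    w_old = np.array([0.5, 0.5, 0.0])
    w_new = np.array([0.7, -0.2, 0.5])   # includes a short position
    r = np.array([0.01, -0.005, 0.02])   # per-asset returns for the period
    print(step(w_old, w_new, r))
```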
