Abstract: Recent advancements in large language models (LLMs) demonstrate exceptional Chinese text processing capabilities, particularly in Chinese Spelling Correction (CSC). While LLMs outperform traditional BERT-based models in accuracy and robustness, challenges persist in reliability and generalization. This paper proposes CEC-Zero, a novel reinforcement learning (RL) framework that enables LLMs to self-correct by autonomously learning error-correction strategies without external supervision. By integrating RL with LLMs' generative power, the method eliminates dependency on annotated data or auxiliary models. Experiments reveal that RL-enhanced LLMs achieve industry-viable accuracy and superior cross-domain generalization, offering a scalable solution for reliability optimization in Chinese NLP applications. This breakthrough facilitates LLM deployment in practical Chinese text correction scenarios while establishing a new paradigm for self-improving language models.
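As a rough illustration of how a label-free reward might drive such self-correction, the following Python sketch scores sampled correction candidates by self-consistency (agreement with the majority sample), penalized by edit distance from the source so that corrections stay minimal. The majority-vote reward, the penalty weight, and the sample sentences are all illustrative assumptions, not the paper's actual CEC-Zero objective.

```python
from collections import Counter

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings (single-row DP)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def consistency_reward(source: str, candidates: list[str]) -> list[float]:
    """Reward sampled corrections without gold labels: agreement with the
    majority candidate, minus a penalty for straying far from the source
    (spelling fixes should be minimal edits). The 0.1 weight is arbitrary."""
    majority, _ = Counter(candidates).most_common(1)[0]
    return [float(c == majority) - 0.1 * edit_distance(source, c)
            for c in candidates]

# Hypothetical usage: rewards for four sampled corrections of one input,
# where the source misuses 门 for 们.
samples = ["他们在公园里玩", "他们在公园里玩", "他门在公园里玩", "他们在公园里玩"]
print(consistency_reward("他门在公园里玩", samples))  # [0.9, 0.9, 0.0, 0.9]
```

In a REINFORCE-style setup, such per-sample rewards would weight the policy-gradient update on the model's own generations, which is one plausible way to train without annotated correction pairs.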
Abstract: Graph embedding techniques have led to significant progress in recent years. However, existing techniques are not effective enough to capture the patterns of networks. This paper proposes neighbor2vec, an algorithm that uses a neighbor-based sampling strategy to learn neighborhood representations of nodes, and a framework that gathers structural information through feature propagation between a node and its neighbors. We claim that neighbor2vec is a simple and effective approach to improving both the scalability and the quality of graph embeddings, and that it breaks the limits of existing state-of-the-art unsupervised techniques. We conduct experiments on several node classification and link prediction tasks on networks such as ogbn-arxiv, ogbn-products, ogbn-proteins, ogbl-ppa, ogbl-collab, and ogbl-citation2. The results show that neighbor2vec's representations achieve average accuracy scores up to 6.8 percent higher than competing methods on node classification tasks and 3.0 percent higher on link prediction tasks. The neighbor2vec representations outperform all baseline methods and two classical GNN models in all six experiments.
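To make the neighbor-sampling and feature-propagation idea concrete, here is a minimal Python sketch in which each node averages its own feature vector with those of a few sampled neighbors, repeated over several hops. The sample size, the mean aggregator, and the hop count are assumptions for illustration; the paper's exact sampling strategy and propagation rule may differ.

```python
import random

def propagate(features: dict[int, list[float]],
              adj: dict[int, list[int]],
              num_samples: int = 2,
              hops: int = 2) -> dict[int, list[float]]:
    """Neighbor-based feature propagation: at each hop, every node's vector
    becomes the mean of its current vector and those of up to `num_samples`
    randomly sampled neighbors."""
    reps = {n: f[:] for n, f in features.items()}
    for _ in range(hops):
        nxt = {}
        for node, vec in reps.items():
            nbrs = adj.get(node, [])
            sampled = random.sample(nbrs, min(num_samples, len(nbrs)))
            # Stack the node's own vector with its sampled neighbors' vectors
            # and average element-wise.
            stack = [vec] + [reps[n] for n in sampled]
            nxt[node] = [sum(col) / len(stack) for col in zip(*stack)]
        reps = nxt
    return reps

# Toy graph: a triangle 0-1-2 with a pendant node 3 attached to node 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0], 3: [0.0, 0.0]}
print(propagate(feats, adj))
```

The resulting vectors could then feed a downstream classifier or link predictor, which is how neighborhood representations are typically evaluated on the OGB benchmarks named above.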