Abstract: Automated Program Repair (APR) has advanced rapidly with Large Language Models (LLMs), but most existing methods remain computationally expensive and focus on a small set of languages. Ruby, despite its widespread use in web development and the persistent challenges faced by its developers, has received little attention in APR research. In this paper, we introduce RAMP, a novel lightweight framework that formulates program repair for Ruby as a feedback-driven, iterative process. RAMP employs a team of collaborative agents that generate targeted tests, reflect on errors, and refine candidate fixes until a correct solution is found. Unlike prior approaches, RAMP avoids reliance on large multilingual repair databases or costly fine-tuning, instead operating directly on Ruby through lightweight prompting and test-driven feedback. Evaluation on the XCodeEval benchmark shows that RAMP achieves a pass@1 of 67% on Ruby, outperforming prior approaches. RAMP converges quickly, within five iterations, and ablation studies confirm that test generation and self-reflection are key drivers of its performance. Further analysis shows that RAMP is particularly effective at repairing wrong answers, compilation errors, and runtime errors. Our approach provides new insights into multi-agent repair strategies and establishes a foundation for extending LLM-based debugging tools to under-studied languages.
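To make the feedback loop described in this abstract concrete, the following is a minimal Python sketch of a test-driven, iterative repair cycle of the kind RAMP describes. The agent functions `generate_tests`, `reflect`, and `refine`, the subprocess-based test harness, and their interfaces are illustrative assumptions, not the paper's actual implementation; only the five-iteration cap is taken from the abstract.

```python
import os
import subprocess
import tempfile

MAX_ITERATIONS = 5  # the abstract reports convergence within five iterations


def run_tests(ruby_source: str, tests: list[str]) -> list[str]:
    """Run each generated test (as stdin) against the candidate Ruby program
    and collect failure output; an empty list means all tests passed."""
    failures = []
    with tempfile.NamedTemporaryFile("w", suffix=".rb", delete=False) as f:
        f.write(ruby_source)
        path = f.name
    try:
        for test in tests:
            proc = subprocess.run(
                ["ruby", path], input=test,
                capture_output=True, text=True, timeout=10,
            )
            if proc.returncode != 0:
                failures.append(proc.stderr)
    finally:
        os.remove(path)
    return failures


def repair(buggy_source: str, problem_description: str,
           generate_tests, reflect, refine) -> str:
    """Iterative, feedback-driven repair loop in the spirit of RAMP.

    generate_tests, reflect, and refine stand in for the collaborating
    LLM agents; their prompts and exact signatures are assumptions here.
    """
    tests = generate_tests(problem_description, buggy_source)
    candidate = buggy_source
    for _ in range(MAX_ITERATIONS):
        failures = run_tests(candidate, tests)
        if not failures:
            return candidate              # all generated tests pass
        critique = reflect(candidate, failures)   # self-reflection on errors
        candidate = refine(candidate, critique)   # propose an improved fix
    return candidate
```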
Abstract: The software industry is experiencing a surge in the adoption of Continuous Integration (CI) practices, in both commercial and open-source environments. CI practices facilitate the seamless integration of code changes by employing automated building and testing processes. Frameworks such as Travis CI and GitHub Actions have significantly contributed to simplifying and enhancing the CI process, rendering it more accessible and efficient for development teams. Despite the availability of these CI tools, developers continue to encounter difficulties in accurately flagging commits as either suitable for CI execution or as candidates for skipping, especially for large projects with many dependencies. Inaccurate flagging of commits can lead to resource-intensive test and build processes, as even minor commits may inadvertently trigger the Continuous Integration process. The problem of detecting CI-skip commits can be modeled as a binary classification task in which we decide either to build a commit or to skip it. This study proposes a novel solution that leverages Deep Reinforcement Learning techniques to construct an optimal Decision Tree classifier that addresses the imbalanced nature of the data. We evaluate our solution by running within-project and cross-project validation benchmarks on a diverse range of open-source projects hosted on GitHub, and it shows superior results when compared with existing state-of-the-art methods.
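The binary-classification framing of CI-skip detection can be illustrated with a small Python baseline. This sketch is not the paper's Deep Reinforcement Learning tree construction: the commit features, the synthetic data, and the use of scikit-learn's class_weight="balanced" imbalance handling are illustrative assumptions that only show the build-vs-skip decision task and the class-imbalance concern the abstract raises.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical commit-level features (not the paper's exact feature set):
# e.g. files changed, lines added, lines deleted, docs-only flag,
# previous-commit-skipped flag. Synthetic data stands in for real commits.
rng = np.random.default_rng(0)
X = rng.random((1_000, 5))
y = (rng.random(1_000) < 0.15).astype(int)  # 1 = skip CI (minority class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# class_weight="balanced" is a simple stand-in for imbalance handling;
# the paper instead learns the decision tree with deep reinforcement learning.
clf = DecisionTreeClassifier(max_depth=5, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["build", "skip"]))
```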
Abstract: As cyber threats and malware attacks increasingly alarm both individuals and businesses, the urgency for proactive malware countermeasures intensifies. This has driven a rising interest in automated machine learning solutions. Transformers, a cutting-edge category of attention-based deep learning methods, have demonstrated remarkable success. In this paper, we present BERTroid, an innovative malware detection model built on the BERT architecture. Overall, BERTroid emerges as a promising solution for combating Android malware, and its ability to outperform state-of-the-art solutions demonstrates its potential as a proactive defense mechanism against malicious software attacks. Additionally, we evaluate BERTroid on multiple datasets to assess its performance across diverse scenarios. In the dynamic landscape of cybersecurity, our approach has demonstrated promising resilience against the rapid evolution of malware on Android systems. While the machine learning model captures broad patterns, we emphasize the role of manual validation for deeper comprehension and insight into these behaviors. This human intervention is critical for discerning intricate and context-specific behaviors, thereby validating and reinforcing the model's findings.
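A BERT-based detector of the kind this abstract describes can be sketched with the Hugging Face Transformers library. The abstract does not specify BERTroid's input encoding or training setup, so the representation of an app as a string of permissions and API calls, the bert-base-uncased checkpoint, and the two-example forward pass below are assumptions meant only to illustrate binary sequence classification with BERT.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: each Android app is represented as a space-separated string of
# extracted permissions / API calls; BERTroid's actual encoding is not given.
apps = [
    "android.permission.SEND_SMS android.permission.READ_CONTACTS getDeviceId",
    "android.permission.INTERNET android.permission.ACCESS_NETWORK_STATE",
]
labels = torch.tensor([1, 0])  # 1 = malware, 0 = benign

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

inputs = tokenizer(apps, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs, labels=labels)  # forward pass with classification loss
outputs.loss.backward()                   # one fine-tuning gradient step would follow
print(outputs.logits.softmax(dim=-1))     # per-app malware probabilities
```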