R2IF: Aligning Reasoning with Decisions via Composite Rewards for Interpretable LLM Function Calling

Apr 22, 2026

View paper on arXiv