Abstract: LLM-based agents often operate in a greedy, step-by-step manner, selecting actions solely based on the current observation without considering long-term consequences or alternative paths. This lack of foresight is particularly problematic in web environments, which are only partially observable (limited to browser-visible content such as the DOM and UI elements) and where a single misstep often requires complex and brittle navigation to undo. Without an explicit backtracking mechanism, agents struggle to correct errors or systematically explore alternative paths. Tree-search methods provide a principled framework for such structured exploration, but existing approaches lack mechanisms for safe backtracking, making them prone to unintended side effects; they also assume that all actions are reversible, ignoring the presence of irreversible actions. These limitations reduce their effectiveness in realistic web tasks. To address these challenges, we introduce WebOperator, a tree-search framework that enables reliable backtracking and strategic exploration. Our method incorporates a best-first search strategy that ranks actions by both reward estimates and safety considerations, along with a robust backtracking mechanism that verifies the feasibility of previously visited paths before replaying them, preventing unintended side effects. To further guide exploration, WebOperator generates action candidates from multiple, varied reasoning contexts to ensure diverse and robust exploration, and then curates a high-quality action set by filtering out invalid actions before execution and merging semantically equivalent ones. Experimental results on WebArena and WebVoyager demonstrate the effectiveness of WebOperator. On WebArena, WebOperator achieves a state-of-the-art 54.6% success rate with gpt-4o, underscoring the critical advantage of integrating strategic foresight with safe execution.
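
To make the search strategy described in the abstract concrete, the following is a minimal, hypothetical sketch of best-first action selection that combines a reward estimate with a safety score and verifies a path's feasibility before replaying it. None of these names (estimate_reward, estimate_safety, is_replayable, expand) come from the paper; they are assumptions standing in for WebOperator's actual components.

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Candidate:
        neg_score: float                     # negated so heapq acts as a max-heap
        path: list = field(compare=False)    # action sequence from the root

    def estimate_reward(path):               # stand-in for an LLM-based value estimate
        return 1.0 / (1 + len(path))

    def estimate_safety(path):               # stand-in: penalize likely-irreversible actions
        return 0.0 if path and path[-1].startswith("submit") else 1.0

    def is_replayable(path):                 # stand-in feasibility check before replay
        return True                          # e.g., re-validate each step's target still exists

    def best_first_search(root_actions, expand, is_goal, budget=50):
        frontier = []
        for action in root_actions:
            score = estimate_reward([action]) + estimate_safety([action])
            heapq.heappush(frontier, Candidate(-score, [action]))
        while frontier and budget > 0:
            budget -= 1
            path = heapq.heappop(frontier).path
            # Safe backtracking: replay a previously visited path only after
            # verifying it is still feasible, since page state may have changed.
            if not is_replayable(path):
                continue
            if is_goal(path):
                return path
            for action in expand(path):
                new_path = path + [action]
                score = estimate_reward(new_path) + estimate_safety(new_path)
                heapq.heappush(frontier, Candidate(-score, new_path))
        return None

The ranking here simply sums the two scores; the paper may combine reward and safety differently, and the sketch omits the action-candidate generation and deduplication steps the abstract describes.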
Abstract: Lightning, a common feature of severe meteorological conditions, poses significant risks, ranging from direct human injuries to substantial economic losses. These risks are further exacerbated by climate change. Early and accurate prediction of lightning would enable preventive measures to safeguard people, protect property, and minimize economic losses. In this paper, we present DeepLight, a novel deep learning architecture for predicting lightning occurrences. Existing prediction models face several critical limitations: they often struggle to capture the dynamic spatial context and inherent uncertainty of lightning events, underutilize key observational data, such as radar reflectivity and cloud properties, and rely heavily on Numerical Weather Prediction (NWP) systems, which are both computationally expensive and highly sensitive to parameter settings. To overcome these challenges, DeepLight leverages multi-source meteorological data, including radar reflectivity, cloud properties, and historical lightning occurrences, through a dual-encoder architecture. By employing multi-branch convolution techniques, it dynamically captures spatial correlations across varying extents. Furthermore, its novel Hazy Loss function explicitly addresses the spatio-temporal uncertainty of lightning by penalizing deviations based on proximity to true events, enabling the model to better learn patterns amidst randomness. Extensive experiments show that DeepLight improves the Equitable Threat Score (ETS) by 18%-30% over state-of-the-art methods, establishing it as a robust solution for lightning prediction.
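
The abstract does not give the Hazy Loss formula, but its core idea (penalizing prediction errors less when they fall near true events) can be sketched as a binary cross-entropy against a spatially softened target. The Gaussian blur, the sigma value, and the normalization below are illustrative assumptions, not the paper's formulation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def hazy_bce(pred, target, sigma=2.0, eps=1e-7):
        """pred, target: 2D arrays over the spatial grid, values in [0, 1]."""
        # Soften the binary target so cells near a true lightning event keep
        # a high value; a near-miss prediction is then penalized only mildly.
        hazy_target = gaussian_filter(target.astype(float), sigma=sigma)
        hazy_target = hazy_target / (hazy_target.max() + eps)   # keep peaks near 1
        pred = np.clip(pred, eps, 1 - eps)
        return -np.mean(hazy_target * np.log(pred)
                        + (1 - hazy_target) * np.log(1 - pred))

A temporal analogue (blurring across adjacent time steps) follows the same pattern, which is presumably how the spatio-temporal uncertainty mentioned above would be handled.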




Abstract: Crime prediction is a widely studied research problem due to its importance in ensuring the safety of city dwellers. Building on earlier statistical and classical machine learning methods, researchers have in recent years increasingly turned to deep learning-based models for crime prediction. Deep learning-based crime prediction models use complex architectures to capture the latent features in crime data, and they outperform the statistical and classical machine learning approaches. However, a significant gap remains in existing research on the applicability of different models to different real-life scenarios, as no longitudinal study compares all of these approaches in a unified setting. In this paper, we conduct a comprehensive experimental evaluation of the major state-of-the-art deep learning-based crime prediction models. Our evaluation provides several key insights into the pros and cons of these models, enabling us to select the most suitable model for different application scenarios. Based on these findings, we further recommend design practices that should be taken into account when building future deep learning-based crime prediction models.




Abstract: Accuracy and interpretability are two essential properties of a crime prediction model. Because of the adverse effects that crime can have on human life, the economy, and safety, we need a model that predicts future occurrences of crime as accurately as possible so that early steps can be taken to prevent it. An interpretable model, in turn, reveals the reasoning behind its predictions, ensures transparency, and allows us to plan crime prevention steps accordingly. The key challenge in developing such a model is capturing the non-linear spatial dependencies and temporal patterns of a specific crime category while keeping the underlying structure of the model interpretable. In this paper, we develop AIST, an Attention-based Interpretable Spatio-Temporal Network for crime prediction. AIST models the dynamic spatio-temporal correlations of a crime category based on past crime occurrences, external features (e.g., traffic flow and point-of-interest (POI) information), and recurring crime trends. Extensive experiments on real datasets show the superiority of our model in terms of both accuracy and interpretability.
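
As a generic illustration of how attention can serve interpretability in such a model, the sketch below computes attention weights over encoded past time steps; the weights indicate which past steps drove a prediction. This is not AIST's actual architecture, and all shapes and names are assumptions.

    import numpy as np

    def temporal_attention(history, query):
        """history: (T, d) encodings of T past time steps; query: (d,) current context."""
        scores = history @ query                 # (T,) alignment scores
        weights = np.exp(scores - scores.max())  # numerically stable softmax
        weights = weights / weights.sum()
        context = weights @ history              # (d,) attended summary
        return context, weights                  # weights explain the prediction

    T, d = 24, 8                                 # e.g., encodings of the last 24 hours
    rng = np.random.default_rng(0)
    ctx, w = temporal_attention(rng.normal(size=(T, d)), rng.normal(size=d))
    print("most influential past step:", int(w.argmax()))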