This volume contains the papers presented at the Fifth Workshop on Answer Set Programming and Other Computing Paradigms (ASPOCP 2012), held on September 4th, 2012 in Budapest, co-located with the 28th International Conference on Logic Programming (ICLP 2012). It thus continues a series of events co-located with ICLP, aiming to facilitate discussion on pushing the boundaries of current ASP techniques in theory, solving, and applications, in combination with or inspired by other computing paradigms.
The AgreementMaker system was the leading system in the anatomy task of the Ontology Alignment Evaluation Initiative (OAEI) competition in 2011. While AgreementMaker did not compete in OAEI 2012, here we report on its performance in the 2012 anatomy task, using the same configurations submitted to OAEI 2011. We also test AgreementMaker using an updated version of the UBERON ontology as a mediating ontology, with otherwise identical configurations. AgreementMaker achieved an F-measure of 91.8% with the 2011 configurations, and an F-measure of 92.2% with the updated UBERON ontology. Thus, AgreementMaker would have been the second best system had it competed in the anatomy task of OAEI 2012, falling only 0.1% below the F-measure of the best system.
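For context, the F-measure reported above is the harmonic mean of precision and recall computed against a reference alignment. A minimal sketch of this computation (the mappings below are illustrative placeholders, not OAEI data):

```python
# Minimal sketch of OAEI-style alignment evaluation: precision, recall,
# and F-measure of a system alignment against a reference alignment.
# The mappings below are illustrative placeholders, not real OAEI data.

def f_measure(system: set, reference: set) -> tuple:
    """Return (precision, recall, F1) of `system` vs `reference`."""
    correct = len(system & reference)          # mappings found in both
    precision = correct / len(system) if system else 0.0
    recall = correct / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

reference = {("mouse:brain", "human:brain"), ("mouse:heart", "human:heart")}
system = {("mouse:brain", "human:brain"), ("mouse:liver", "human:lung")}
print(f_measure(system, reference))  # (0.5, 0.5, 0.5)
```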
The First Competition on the Binarization of Historical Persian Documents and Manuscripts (PHIBC 2012) was organized in conjunction with the First Iranian Conference on Pattern Recognition and Image Analysis (PRIA 2013). The main objective of PHIBC 2012 is to evaluate the performance of binarization methodologies when applied to Persian heritage images. This paper reports on the competition's methodology and on the performance of the three submitted algorithms according to the evaluation measures used.
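The submitted algorithms are not described above; as a point of reference, here is a minimal global-thresholding baseline for document binarization (Otsu's method in NumPy, chosen here purely as an illustrative baseline; competition entries are typically locally adaptive and far more elaborate):

```python
import numpy as np

def otsu_binarize(gray: np.ndarray) -> np.ndarray:
    """Binarize a grayscale image (uint8) with Otsu's global threshold.

    A baseline only: heritage documents with stains and bleed-through
    usually need locally adaptive methods.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                 # class-0 probability
    mu = np.cumsum(prob * np.arange(256))   # class-0 mean times omega
    mu_t = mu[-1]                           # global mean
    # Between-class variance for every candidate threshold.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    t = int(np.nanargmax(sigma_b))
    return (gray > t).astype(np.uint8) * 255  # 255 = background, 0 = ink
```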
Summarizing web graphs is challenging due to the heterogeneity of the modeled information and its changes over time. We investigate the use of neural networks for lifelong graph summarization. Assuming we observe the web graph at a certain time, we train the networks to summarize graph vertices. We apply this trained network to summarize the vertices of the changed graph at the next point in time. Subsequently, we continue training and evaluating the network to perform lifelong graph summarization. We use the GNNs Graph-MLP and GraphSAINT, as well as an MLP baseline, to summarize the temporal graphs. We compare $1$-hop and $2$-hop summaries. We investigate the impact of reusing parameters from a previous snapshot by measuring the backward and forward transfer and the forgetting rate of the neural networks. Our extensive experiments on ten weekly snapshots of a web graph with over $100$M edges, sampled in 2012 and 2022, show that all networks predominantly use $1$-hop information to determine the summary, even when performing $2$-hop summarization. Due to the heterogeneity of web graphs, in some snapshots, the $2$-hop summary produces over ten times more vertex summaries than the $1$-hop summary. When using the network trained on the last snapshot from 2012 and applying it to the first snapshot of 2022, we observe a strong drop in accuracy. We attribute this drop over the ten-year time warp to the strongly increased heterogeneity of the web graph in 2022.
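One common way to define such vertex summaries is structural equivalence over neighborhoods: vertices fall into the same summary class if they agree on their neighbors' labels. A minimal sketch of 1-hop summarization under this assumed definition (the paper's exact summary model may differ):

```python
from collections import defaultdict

def one_hop_summary(edges, labels):
    """Group vertices into summary classes by the set of their
    neighbors' labels (1-hop structural equivalence).

    edges  -- iterable of (source, target) pairs
    labels -- dict: vertex -> label (e.g. rdf:type)
    """
    neigh = defaultdict(set)
    for s, t in edges:
        neigh[s].add(labels.get(t, "?"))
    classes = defaultdict(list)
    for v in set(labels) | set(neigh):
        # Frozen label set = summary key for vertex v.
        classes[frozenset(neigh[v])].append(v)
    return classes

edges = [("a", "b"), ("c", "b"), ("d", "e")]
labels = {"b": "Person", "e": "Place"}
print(one_hop_summary(edges, labels))
# e.g. {'Person'} -> ['a', 'c'], {'Place'} -> ['d'], set() -> ['b', 'e']

# A 2-hop summary would additionally hash each neighbor's 1-hop key,
# which is why heterogeneous graphs can explode into many more classes.
```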
In this project, we create a model to classify images for the Pascal VOC Challenge 2012. We use convolutional neural networks, trained on a single GPU instance provided by Amazon via its cloud service Amazon Web Services (AWS), to classify images in the Pascal VOC 2012 data set. We train multiple convolutional neural network models and select the best one, which achieved a validation accuracy of 85.6% and a test accuracy of 85.24%.
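The abstract does not specify the architecture; the following PyTorch sketch only illustrates the kind of convolutional classifier involved (layer sizes and input resolution are assumptions, not the model actually trained):

```python
import torch
import torch.nn as nn

# Illustrative CNN for 20-class Pascal VOC-style classification.
# Layer sizes are assumptions for the sketch, not the reported model.
class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 224 -> 112
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 112 -> 56
            nn.AdaptiveAvgPool2d(1),              # global pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 224, 224))  # batch of 4 RGB images
print(logits.shape)  # torch.Size([4, 20])
```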
This is the Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, which was held on Catalina Island, CA, August 14-18, 2012.
Background. Dengue outbreaks are a major public health issue, with Brazil reporting 71% of global cases in 2024. Purpose. This study aims to describe the profile of severe dengue patients admitted to Brazilian intensive care units (ICUs) from 2012 to 2024, assess trends over time, describe new-onset complications during the ICU stay, and determine the admission risk factors for developing such complications. Methods. We performed a prospective study of dengue patients from 253 ICUs across 56 hospitals. We used descriptive statistics to characterize the dengue ICU population, logistic regression to identify risk factors for complications during the ICU stay, and a machine learning framework to predict the risk of evolving to complications. Visualisations were generated using ISARIC VERTEX. Results. Of 11,047 admissions, 1,117 (10.1%) evolved to complications, including non-invasive ventilation (437 admissions), invasive ventilation (166), vasopressors (364), blood transfusion (353), and renal replacement therapy (103). Age > 80 (OR: 3.10, 95% CI: 2.02-4.92), chronic kidney disease (OR: 2.94, 2.22-3.89), liver cirrhosis (OR: 3.65, 1.82-7.04), low platelets (<50,000 cells/mm3; OR: 2.25, 1.89-2.68), and high leukocytes (>7,000 cells/mm3; OR: 2.47, 2.02-3.03) were significant risk factors for complications. A machine learning tool for predicting complications was proposed, showing accurate discrimination and calibration. Conclusion. We described a large cohort of dengue patients admitted to ICUs and identified key risk factors for severe dengue complications, such as advanced age, presence of comorbidities, higher leukocyte levels and lower platelet levels. The proposed prediction tool can be used for early identification and targeted interventions to improve outcomes in dengue-endemic regions.
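As an illustration of the logistic-regression step behind the reported odds ratios, a minimal sketch on synthetic data (variable names mirror the abstract's risk factors; the values are random, not the study's ICU records, and statsmodels is one possible library choice):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative synthetic data; variable names mirror the abstract's risk
# factors but the values are random, not the study's ICU records.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age_over_80":     rng.integers(0, 2, n),
    "ckd":             rng.integers(0, 2, n),
    "platelets_low":   rng.integers(0, 2, n),   # < 50,000 cells/mm3
    "leukocytes_high": rng.integers(0, 2, n),   # > 7,000 cells/mm3
})
# Synthetic outcome loosely tied to the covariates.
logit = -2 + 1.1 * df.age_over_80 + 1.0 * df.ckd + 0.8 * df.platelets_low
df["complication"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df.drop(columns="complication"))
fit = sm.Logit(df["complication"], X).fit(disp=0)
tab = pd.concat([fit.params, fit.conf_int()], axis=1)
tab.columns = ["log_odds", "ci_low", "ci_high"]
print(np.exp(tab))  # odds ratios with 95% CIs, as in the abstract
```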
In the LegenDary project, we started new research based on Agent2D in the RoboCup 2D soccer simulation league. In this paper, we present the algorithms and structures used to develop our team, each in a separate section. We focused on passing, dribbling, and blocking skills, improving them to prepare the team for this competition. The through pass is the most important part of our team's play, and we invested a great deal of work in it.
In this paper, we tackle the problem of visual categorization of dog breeds, which is a surprisingly challenging task due to simultaneously low inter-class distances and high intra-class variances. Our approach combines several techniques well known in our community but often not utilized for fine-grained recognition: (1) automatic segmentation, (2) efficient part detection, and (3) combination of multiple features. In particular, we demonstrate that a simple head detector embedded in an off-the-shelf recognition pipeline can improve recognition accuracy quite significantly, highlighting the importance of part features for fine-grained recognition tasks. Using our approach, we achieved a mean average precision of 24.59% on the Stanford Dogs dataset.
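A minimal sketch of the feature-combination idea: concatenate a global descriptor with a descriptor from the detected head region and train a linear classifier (the descriptors below are stand-ins, not the paper's actual features):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Stand-in descriptors: in the paper these would be, e.g., features from
# the segmented dog body and from the detected head region.
def global_features(img):     # placeholder global descriptor
    return img.mean(axis=(0, 1))

def head_features(img, box):  # placeholder part descriptor
    y0, y1, x0, x1 = box
    return img[y0:y1, x0:x1].mean(axis=(0, 1))

def combined(img, head_box):
    # Simple late fusion: concatenate per-region descriptors.
    return np.concatenate([global_features(img), head_features(img, head_box)])

# Toy training run on random "images" (H x W x 3 arrays).
rng = np.random.default_rng(0)
X = np.stack([combined(rng.random((64, 64, 3)), (10, 30, 10, 30))
              for _ in range(40)])
y = rng.integers(0, 2, 40)  # two stand-in breeds
clf = LinearSVC().fit(X, y)
print(clf.score(X, y))
```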
The semantics of emoji has, to date, been considered from a static perspective. We offer the first longitudinal study of how emoji semantics changes over time, applying techniques from computational linguistics to six years of Twitter data. We identify five patterns in emoji semantic development and find evidence that the less abstract an emoji is, the more likely it is to undergo semantic change. In addition, we analyse select emoji in more detail, examining the effect of seasonality and world events on emoji semantics. To aid future work on emoji and semantics, we make our data publicly available along with a web-based interface that anyone can use to explore semantic change in emoji.
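One standard way to quantify such semantic change (a sketch of a common approach, not necessarily this paper's exact pipeline) is to train embeddings per time slice, align the spaces, and measure the cosine distance of an emoji's vector across slices:

```python
import numpy as np

def align(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Orthogonal Procrustes: rotate embedding space `a` onto `b`."""
    u, _, vt = np.linalg.svd(a.T @ b)
    return a @ (u @ vt)

def semantic_change(vec_t0, vec_t1) -> float:
    """Cosine distance of one emoji's vector between two time slices."""
    cos = vec_t0 @ vec_t1 / (np.linalg.norm(vec_t0) * np.linalg.norm(vec_t1))
    return 1.0 - cos

# Toy example: two random "yearly" embedding matrices over a shared vocab.
rng = np.random.default_rng(0)
emb_2013 = rng.normal(size=(1000, 50))   # vocab x dim, time slice 1
emb_2018 = rng.normal(size=(1000, 50))   # vocab x dim, time slice 2
emb_2013_aligned = align(emb_2013, emb_2018)
fire_idx = 42                            # hypothetical row of one emoji
print(semantic_change(emb_2013_aligned[fire_idx], emb_2018[fire_idx]))
```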