
"Topic": models, code, and papers

Learning Determinantal Point Processes in Sublinear Time

Oct 19, 2016
Christophe Dupuy, Francis Bach

We propose a new class of determinantal point processes (DPPs) which can be manipulated for inference and parameter learning in potentially sublinear time in the number of items. This class, based on a specific low-rank factorization of the marginal kernel, is particularly suited to a subclass of continuous DPPs and to DPPs defined on exponentially many items. We apply this new class to modelling text documents as samples from a DPP over sentences, and propose a conditional maximum-likelihood formulation to model topic proportions, which our class of DPPs makes possible without approximation. We present an application to document summarization with a DPP on $2^{500}$ items.
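
As a rough illustration of the low-rank idea (not the authors' exact construction): with a marginal kernel K = V V^T of rank r, the inclusion probability of a small subset A is det(K_A), which can be computed from an |A| x r slice of V without ever forming the n x n kernel. A minimal numpy sketch, with an arbitrary feature matrix standing in for a learned one:

```python
import numpy as np

rng = np.random.default_rng(0)

n, r = 100_000, 10           # many items, low rank (illustrative sizes)
V = rng.normal(size=(n, r))  # hypothetical feature matrix defining the kernel

# Scale so the marginal kernel K = V V^T has eigenvalues in [0, 1],
# as required for a valid DPP marginal kernel. The eigenvalues of
# V V^T equal those of the r x r Gram matrix V^T V.
gram = V.T @ V
V /= np.sqrt(np.linalg.eigvalsh(gram).max() + 1e-12)

def inclusion_probability(items):
    """P(A is contained in the sample) = det(K_A), computed from the
    |A| x r slice of V only -- no n x n matrix is ever formed."""
    VA = V[items]      # |A| x r
    KA = VA @ VA.T     # |A| x |A|
    return np.linalg.det(KA)

print(inclusion_probability([3, 17, 42_000]))
```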

* Under review for AISTATS 2017 

  Access Paper or Ask Questions

Self-motions of pentapods with linear platform

Oct 16, 2015
Georg Nawratil, Josef Schicho

We give a full classification of all pentapods with linear platform possessing a self-motion besides the trivial rotation about the platform. Recent research necessitates a contemporary and accurate re-examination of old results on this topic given by Darboux, Mannheim, Duporcq and Bricard, one which also takes the coincidence of platform anchor points into account. For our study we use bond theory with respect to a novel kinematic mapping for pentapods with linear platform, besides the method of singular-invariant leg-rearrangements. Based on our results we design pentapods with linear platform which have a simplified direct kinematics with respect to the number of (real) solutions.

* 28 pages, 5 figures 

  Access Paper or Ask Questions

Qualitative Robustness of Support Vector Machines

Nov 03, 2011
Robert Hable, Andreas Christmann

Support vector machines have attracted much attention in theoretical and in applied statistics. Main topics of recent interest are consistency, learning rates and robustness. In this article, it is shown that support vector machines are qualitatively robust. Since support vector machines can be represented by a functional on the set of all probability measures, qualitative robustness is proven by showing that this functional is continuous with respect to the topology generated by weak convergence of probability measures. Combined with the existence and uniqueness of support vector machines, our results show that support vector machines are the solutions of a well-posed mathematical problem in Hadamard's sense.
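
A small empirical companion to the statement (not the paper's proof): fit an SVM on clean data and on a copy where a small fraction of labels is contaminated, then compare the two decision functions. The dataset, kernel, and contamination level are arbitrary choices:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Contaminate a small fraction of the labels -- a small perturbation
# of the empirical distribution.
eps = 0.02
y_contaminated = y.copy()
flip = rng.choice(len(y), size=int(eps * len(y)), replace=False)
y_contaminated[flip] = 1 - y_contaminated[flip]

clean = SVC(kernel="rbf", C=1.0).fit(X, y)
dirty = SVC(kernel="rbf", C=1.0).fit(X, y_contaminated)

# Qualitative robustness suggests the two decision functions stay close.
grid = rng.normal(size=(1000, 5))
gap = np.abs(clean.decision_function(grid) - dirty.decision_function(grid))
print(f"max |f_clean - f_dirty| on test points: {gap.max():.3f}")
```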


  Access Paper or Ask Questions

Speculation on graph computation architectures and computing via synchronization

Apr 24, 2004
Bayle Shanks

A speculative overview of a future topic of research. The paper is a collection of ideas concerning two related areas: 1) Graph computation machines ("computing with graphs"). This is the class of models of computation in which the state of the computation is represented as a graph or network. 2) Arc-based neural networks, which store information not as activation in the nodes, but rather by adding and deleting arcs. Sometimes the arcs may be interpreted as synchronization. Warnings to readers: this is not the sort of thing that one might submit to a journal or conference. No proofs are presented. The presentation is informal, and written at an introductory level. You'll probably want to wait for a more concise presentation.
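
To make the "arc-based" idea concrete, here is a toy sketch (invented for this note, not from the paper) that stores a bit string purely in the presence or absence of arcs over a fixed node set, using networkx:

```python
import networkx as nx

# Fixed set of nodes; the presence or absence of arcs carries the state,
# not any activation values stored on the nodes.
nodes = list(range(8))

def write_bits(bits):
    """Encode a bit string by adding an arc i -> i+1 for every 1 bit."""
    g = nx.DiGraph()
    g.add_nodes_from(nodes)
    for i, b in enumerate(bits):
        if b == "1":
            g.add_edge(i, i + 1)
    return g

def read_bits(g, n_bits):
    """Recover the bit string by testing which arcs exist."""
    return "".join("1" if g.has_edge(i, i + 1) else "0" for i in range(n_bits))

g = write_bits("1011010")
print(read_bits(g, 7))   # -> 1011010
```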

* 61 pages. Informal, rambling. (replacement changed only the abstract) 

  Access Paper or Ask Questions

A Cross-media Retrieval System for Lecture Videos

Sep 13, 2003
Atsushi Fujii, Katunobu Itou, Tomoyosi Akiba, Tetsuya Ishikawa

We propose a cross-media lecture-on-demand system, in which users can selectively view specific segments of lecture videos by submitting text queries. Users can easily formulate queries by using the textbook associated with a target lecture, even if they cannot come up with effective keywords. Our system extracts the audio track from a target lecture video, generates a transcription by large vocabulary continuous speech recognition, and produces a text index. Experimental results showed that by adapting speech recognition to the topic of the lecture, the recognition accuracy increased and the retrieval accuracy was comparable with that obtained by human transcription.
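
A minimal sketch of the retrieval side only, assuming the speech-recognition step has already produced segment transcripts: build a TF-IDF index over the segments and rank them against a textbook-derived query. The segment texts and the query are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder transcripts of lecture-video segments (would come from ASR).
segments = [
    "today we introduce hidden markov models for speech recognition",
    "the viterbi algorithm finds the most likely state sequence",
    "language models assign probabilities to word sequences",
]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(segments)

# Query formulated from the textbook passage the user is reading.
query = "decoding with the viterbi algorithm"
scores = cosine_similarity(vectorizer.transform([query]), index).ravel()

for rank, i in enumerate(scores.argsort()[::-1], start=1):
    print(rank, f"{scores[i]:.3f}", segments[i])
```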

* Proceedings of the 8th European Conference on Speech Communication and Technology (Eurospeech 2003), pp.1149-1152, Sep. 2003 

  Access Paper or Ask Questions

Parallel Parking: Optimal Entry and Minimum Slot Dimensions

May 05, 2022
Jiri Vlasak, Michal Sojka, Zdeněk Hanzálek

The problem of path planning for automated parking is usually presented as finding a collision-free path from initial to goal positions, where three out of four parking slot edges represent obstacles. We rethink the path planning problem for parallel parking by decomposing it into two independent parts. The topic of this paper is finding optimal parking slot entry positions. Path planning from initial to entry position is out of scope here. We show the relation between entry positions, parking slot dimensions, and the number of backward-forward direction changes. This information can be used as an input to optimize other parts of the automated parking process.
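
A rough sketch of the "three slot edges are obstacles" formulation (not the paper's optimization): test whether a candidate entry pose keeps the car footprint clear of the curb-side edge and the two end edges. All dimensions, the pose, and the simplification that the end edges extend toward the street are assumptions made for illustration:

```python
import numpy as np

def car_corners(x, y, theta, length=4.5, width=1.8):
    """Corners of the car footprint; (x, y) is the rear-axle midpoint,
    theta the heading. Dimensions are illustrative."""
    rear_overhang = 0.7
    local = np.array([
        [-rear_overhang,          -width / 2],
        [-rear_overhang,           width / 2],
        [length - rear_overhang,   width / 2],
        [length - rear_overhang,  -width / 2],
    ])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return local @ rot.T + np.array([x, y])

def pose_is_collision_free(x, y, theta, slot_length=6.0):
    """Slot occupies 0 <= x <= slot_length, curb at y = 0, street side open.
    The two end edges are treated as extending toward the street -- a
    simplification of the three-obstacle-edges formulation."""
    corners = car_corners(x, y, theta)
    return bool(np.all(corners[:, 1] >= 0.0) and
                np.all((corners[:, 0] >= 0.0) & (corners[:, 0] <= slot_length)))

print(pose_is_collision_free(1.0, 1.0, 0.0))              # parked straight
print(pose_is_collision_free(1.0, 1.0, np.radians(20)))   # angled entry pose
```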

* Proceedings of the 8th International Conference on Vehicle Technology and Intelligent Transport Systems - VEHITS, pages 300-307, INSTICC, SciTePress, 2022 
* 14 pages (title + 9 of paper + 4 of appendix), 11 (paper) + 4 (appendix) figures, sent to VEHITS 2022 conference 

  Access Paper or Ask Questions

Iterated Vector Fields and Conservatism, with Applications to Federated Learning

Sep 08, 2021
Zachary Charles, Keith Rush

We study iterated vector fields and investigate whether they are conservative, in the sense that they are the gradient of some scalar-valued function. We analyze the conservatism of various iterated vector fields, including gradient vector fields associated to loss functions of generalized linear models. We relate this study to optimization and derive novel convergence results for federated learning algorithms. In particular, we show that for certain classes of functions (including non-convex functions), federated averaging is equivalent to gradient descent on a surrogate loss function. Finally, we discuss a variety of open questions spanning topics in geometry, dynamical systems, and optimization.
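
A quick numerical companion to the question being studied (not the paper's analysis): a smooth vector field can be conservative only if its Jacobian is symmetric everywhere, so one can iterate a gradient-step map a few times and test the symmetry of the resulting Jacobian by finite differences. The test function, step size, and evaluation point below are arbitrary:

```python
import numpy as np

def grad_f(w):
    """Gradient of an arbitrary non-quadratic test function
    f(w) = sum(w**4) / 4 + w[0] * w[1]."""
    g = w ** 3
    g[0] += w[1]
    g[1] += w[0]
    return g

def sgd_map(w, lr=0.1):
    """One gradient step; composing this with itself gives an iterated vector field."""
    return w - lr * grad_f(w)

def iterate(field, k):
    def iterated(w):
        for _ in range(k):
            w = field(w)
        return w
    return iterated

def jacobian(field, w, h=1e-5):
    d = len(w)
    J = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = h
        J[:, j] = (field(w + e) - field(w - e)) / (2 * h)
    return J

w0 = np.array([0.3, -0.7, 1.1])
for k in (1, 2, 3):
    J = jacobian(iterate(sgd_map, k), w0)
    # Non-symmetric Jacobian => the iterated field cannot be a gradient.
    print(f"k={k}: max |J - J^T| = {np.abs(J - J.T).max():.2e}")
```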


  Access Paper or Ask Questions

How Lexical Gold Standards Have Effects On The Usefulness Of Text Analysis Tools For Digital Scholarship

May 31, 2021
Jussi Karlgren

This paper describes how the current lexical similarity and analogy gold standards are built to conform to certain ideas about what the models they are designed to evaluate are used for. Topical relevance has always been the most important target notion for information access tools and related language technologies, and while this has proven a useful starting point for much of what information technology is used for, it does not always align well with other uses to which the technologies are being put, most notably use cases from digital scholarship in the humanities and social sciences. This paper argues for a more systematic formulation of requirements from the digital humanities and social sciences and a more explicit description of the assumptions underlying model design.


  Access Paper or Ask Questions

Pros and Cons of GAN Evaluation Measures: New Developments

Mar 17, 2021
Ali Borji

This work is an update of a previous paper on the same topic published a few years ago. With the dramatic progress in generative modeling, a suite of new quantitative and qualitative techniques to evaluate models has emerged. Although some measures such as Inception Score, Fr\'echet Inception Distance, Precision-Recall, and Perceptual Path Length are relatively more popular, GAN evaluation is not a settled issue and there is still room for improvement. For example, in addition to quality and diversity of synthesized images, generative models should be evaluated in terms of bias and fairness. I describe new dimensions that are becoming important in assessing models, and discuss the connection between GAN evaluation and deepfakes.
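
For one of the measures mentioned, the Fréchet Inception Distance is the Fréchet distance between two Gaussians fitted to feature activations, FID = ||mu_1 - mu_2||^2 + Tr(Sigma_1 + Sigma_2 - 2 (Sigma_1 Sigma_2)^{1/2}). A small numpy/scipy sketch on random stand-in features (real use would first extract Inception-v3 activations for real and generated images):

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two sets of feature vectors:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):   # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(2000, 64))   # stand-ins for Inception features
fake = rng.normal(0.3, 1.2, size=(2000, 64))
print(f"FID (toy features): {fid(real, fake):.3f}")
```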

* 27 pages, 21 figures 

  Access Paper or Ask Questions

MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization

Mar 12, 2021
Chenguang Zhu, Yang Liu, Jie Mei, Michael Zeng

We introduce MediaSum, a large-scale media interview dataset consisting of 463.6K transcripts with abstractive summaries. To create this dataset, we collect interview transcripts from NPR and CNN and employ the overviews and topic descriptions as summaries. Compared with existing public corpora for dialogue summarization, our dataset is an order of magnitude larger and contains complex multi-party conversations from multiple domains. We conduct a statistical analysis to demonstrate the unique positional bias exhibited in the transcripts of televised and radio interviews. We also show that MediaSum can be used in transfer learning to improve a model's performance on other dialogue summarization tasks.
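
A hedged loading sketch; the file name and field names (`summary`, `speaker`, `utt`) are assumptions based on the linked repository and should be checked against its README:

```python
import json

# File and field names are assumptions; verify against
# https://github.com/zcgzcgzcg1/MediaSum/ before relying on them.
with open("news_dialogue.json") as f:
    dialogues = json.load(f)

example = dialogues[0]
# Reconstruct the multi-party transcript as "SPEAKER: utterance" lines.
transcript = "\n".join(
    f"{spk}: {utt}" for spk, utt in zip(example["speaker"], example["utt"])
)
print("SUMMARY:", example["summary"])
print(transcript[:500])
```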

* North American Chapter of the Association for Computational Linguistics (NAACL), Mexico City, Mexico, 2021 
* Dataset: https://github.com/zcgzcgzcg1/MediaSum/ 

  Access Paper or Ask Questions
