In this paper, we are interested in developing semantic parsers which understand natural language questions embedded in a conversation with a user and ground them to formal queries over definitions in a general-purpose knowledge graph (KG) with very large vocabularies (covering thousands of concept names and relations, and millions of entities). To this end, we develop a dataset where user questions are annotated with SPARQL parses and system answers correspond to their execution results. We present two different semantic parsing approaches and highlight the challenges of the task: dealing with large vocabularies, modelling conversation context, predicting queries with multiple entities, and generalising to new questions at test time. We hope our dataset will serve as a useful testbed for the development of conversational semantic parsers. Our dataset and models are released at https://github.com/EdinburghNLP/SPICE.
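To make the task concrete, here is a minimal sketch (not drawn from the SPICE dataset itself) of how a single question might be grounded to a SPARQL query over Wikidata and executed with the SPARQLWrapper Python library; the question and the entity/relation identifiers are illustrative assumptions.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Illustrative example: "Who is Barack Obama married to?"
    # grounded to Wikidata (Q76 = Barack Obama, P26 = spouse).
    endpoint = SPARQLWrapper("https://query.wikidata.org/sparql")
    endpoint.setQuery("""
        SELECT ?spouseLabel WHERE {
          wd:Q76 wdt:P26 ?spouse .
          ?spouse rdfs:label ?spouseLabel .
          FILTER (lang(?spouseLabel) = "en")
        }
    """)
    endpoint.setReturnFormat(JSON)
    results = endpoint.query().convert()
    for row in results["results"]["bindings"]:
        print(row["spouseLabel"]["value"])

A conversational parser must additionally resolve context across turns: a follow-up such as "And his children?" reuses the entity introduced in the previous question.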
Arranging music for a different set of instruments than it was originally written for is traditionally a tedious and time-consuming process, performed by experts with intricate knowledge of the specific instruments and involving significant experimentation. In this paper we study the problem of automating music arrangement for pieces written for monophonic instruments or voices. We designed and implemented an algorithm that produces an arrangement whenever one is feasible, by transposing the piece to a different scale, permuting the assignment of parts to instruments/voices, and transposing individual parts by one or more octaves. We also published open-source software, written in Python, that processes MusicXML files and allows musicians to experiment with music arrangements. It is our hope that our software can serve as a platform for future extensions covering music reductions and support for polyphonic instruments.
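As a hint of what such MusicXML processing looks like in practice, the following sketch uses the independent music21 library (not the authors' published software; the file names are hypothetical) to apply two of the operations described above: transposing the whole piece to a different key and shifting an individual part by an octave.

    from music21 import converter, interval

    # Load a MusicXML score (hypothetical file name).
    score = converter.parse("quartet.musicxml")

    # Transpose the entire piece up a major second (change of scale).
    arranged = score.transpose(interval.Interval("M2"))

    # Shift the first part down one octave to fit an instrument's range.
    arranged.parts[0].transpose("P-8", inPlace=True)

    arranged.write("musicxml", fp="quartet_arranged.musicxml")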
In 1987, Eric Horvitz, Greg Cooper, and I visited I.J. Good at his university. We wanted to see him not because he had worked with Alan Turing to help win WWII by decoding encrypted German messages, although that certainly intrigued us. Rather, we wanted to see him because we had just finished reading his book "Good Thinking," which summarized his life's work on Probability and its Applications. We were graduate students at Stanford working in AI, and we were amazed that his thinking was so similar to ours, even though he had worked decades before us and came from a seemingly different perspective that did not involve AI. This story is a fitting introduction to this manuscript. Now, having had years to look back on my work, to boil it down to its essence, and to better appreciate its significance (if any) in the evolution of AI and ML, I realized it was time to put my work in perspective, providing a roadmap for anyone who would like to explore it. After I had this realization, it occurred to me that this is what I.J. Good did in his book. This manuscript is for those who want to understand basic concepts central to ML and AI and to learn about early applications of these concepts. Ironically, after I finished writing this manuscript, I realized that many of the concepts I included are missing from modern courses on ML. I hope this work will help to make up for these omissions. The presentation gets somewhat technical in parts, but I've tried to keep the math to a bare minimum. In addition to the technical presentations, I include stories about how the ideas came to be and the effects they have had. When I was a student in physics, I was given dry texts to read. In class, however, several of my physics professors would tell stories around the work. Those stories fascinated me and really made the theory stick. So here, I do my best to present both the ideas and the stories behind them.
Various congestion control protocols have been designed to achieve high performance in different network environments. Modern online learning solutions that delegate congestion control actions to a machine cannot converge properly within the stringent time scales of data centers. We leverage multi-agent reinforcement learning to design a system for dynamically tuning congestion control parameters at end-hosts in a data center. The system includes agents at the end-hosts that monitor and report the network and traffic states, and agents that run the reinforcement learning algorithm given those states. Based on the state of the environment, the system generates congestion control parameters that optimize network performance metrics such as throughput and latency. As a case study, we examine BBR, a prominent recently developed congestion control protocol. Our experiments demonstrate that the proposed system has the potential to mitigate the problems of statically configured parameters.
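To illustrate the flavor of such parameter tuning, consider the toy sketch below: a single epsilon-greedy learner selecting among hypothetical pacing-gain settings, with a simulated reward standing in for real end-host measurements. This is a deliberately simplified stand-in, not the paper's multi-agent system.

    import random

    GAINS = [1.0, 1.25, 2.0, 2.5, 2.89]   # hypothetical candidate settings

    def measure(gain):
        # Toy reward: throughput grows with the gain, but queueing delay
        # penalizes aggressive settings; a real system would use measured
        # goodput and RTT reported by the monitoring agents.
        throughput = min(gain, 2.0) * 10.0
        delay_penalty = max(0.0, gain - 2.0) * 15.0
        return throughput - delay_penalty + random.gauss(0, 0.5)

    q = {g: 0.0 for g in GAINS}
    counts = {g: 0 for g in GAINS}

    for step in range(2000):
        g = random.choice(GAINS) if random.random() < 0.1 else max(q, key=q.get)
        r = measure(g)
        counts[g] += 1
        q[g] += (r - q[g]) / counts[g]   # incremental mean of observed rewards

    print("best gain:", max(q, key=q.get))

In the actual system, rewards would come from network metrics reported by the end-host agents, and multiple cooperating agents would act across hosts.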
Numerical approximations of partial differential equations (PDEs) are routinely employed to formulate the solution of physics, engineering and mathematical problems involving functions of several variables, such as the propagation of heat or sound, fluid flow, elasticity, electrostatics, electrodynamics, and more. While this has enabled the solution of many complex phenomena, significant limitations remain. Conventional approaches such as Finite Element Methods (FEMs) and Finite Difference Methods (FDMs) require considerable time and are computationally expensive. In contrast, machine learning-based methods such as neural networks are faster once trained, but tend to be restricted to a specific discretization. This article aims to provide a comprehensive summary of conventional methods and recent machine learning-based methods for approximating PDEs numerically. Furthermore, we highlight several key architectures centered around the neural operator, a novel approach to learning the solution operator of a PDE that can be orders of magnitude (up to 1000x) faster than conventional solvers. We note how these new computational approaches can bring immense advantages in tackling many problems in fundamental and applied physics.
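For concreteness, here is a minimal sketch of the conventional approach: an explicit finite-difference scheme for the 1-D heat equation u_t = alpha * u_xx. The discretization parameters are illustrative.

    import numpy as np

    alpha, L, T = 1.0, 1.0, 0.1
    nx = 51
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / alpha          # respects the stability limit dt <= dx**2 / (2*alpha)
    u = np.sin(np.pi * np.linspace(0, L, nx))   # initial condition

    for _ in range(int(T / dt)):
        u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2*u[1:-1] + u[:-2])
        u[0] = u[-1] = 0.0            # Dirichlet boundary conditions

    print(u.max())  # decays toward exp(-pi**2 * alpha * T) ~ 0.373

Schemes like this must tie the time step to the mesh size for stability, which is one source of their computational cost; a neural operator instead learns a discretization-independent map from problem inputs to solutions.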
It is well established that increasing scale in deep transformer networks leads to improved quality and performance. This increase in scale often comes with increased compute cost and inference latency. Consequently, research into methods that realize the benefits of increased scale without increasing compute cost is important. We introduce Alternating Updates (AltUp), a simple-to-implement method for increasing a model's capacity without the computational burden. AltUp widens the learned representation without increasing computation time by operating on a sub-block of the representation at each layer. Our experiments on various transformer models and language tasks demonstrate the consistent effectiveness of alternating updates on a diverse set of benchmarks. Finally, we present extensions of AltUp to the sequence dimension, and demonstrate how AltUp can be synergistically combined with existing approaches, such as Sparse Mixture-of-Experts models, to obtain efficient models with even higher capacity.
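The following PyTorch sketch is one reading of the sub-block idea (the mixing scheme, parameter names, and scalar-gated correction are assumptions, not the paper's exact formulation): the representation is kept as K sub-blocks, only one of which passes through the expensive layer, while the others are updated with cheap learned scalars.

    import torch
    import torch.nn as nn

    class AltUpLayer(nn.Module):
        def __init__(self, layer, k):
            super().__init__()
            self.layer = layer                   # the expensive transformer block
            self.k = k
            self.p = nn.Parameter(torch.eye(k))  # cheap prediction-mixing scalars
            self.g = nn.Parameter(torch.ones(k)) # correction gains per sub-block

        def forward(self, blocks, active):
            # blocks: list of K tensors, each of shape (batch, seq, d)
            stacked = torch.stack(blocks)                            # (K, B, S, d)
            pred = torch.einsum('ij,jbsd->ibsd', self.p, stacked)    # cheap prediction
            y = self.layer(pred[active])                             # expensive compute, one block
            delta = y - pred[active]
            return [pred[i] + self.g[i] * delta for i in range(self.k)]

    # Usage sketch: a width-64 feed-forward block with K=2 sub-blocks.
    ff = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64))
    alt = AltUpLayer(ff, k=2)
    x = [torch.randn(8, 16, 64) for _ in range(2)]
    x = alt(x, active=0)

The key point is that the widened state (K times the width) costs only K*K mixing scalars and K gains per layer on top of a single standard-width layer evaluation.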
The formal privacy guarantee provided by Differential Privacy (DP) bounds the leakage of sensitive information from deep learning models. In practice, however, this comes at a severe cost in computation and accuracy. The recently established state-of-the-art (SOTA) results in image classification under DP rely on heavy data augmentation and large batch sizes, leading to a drastically increased computation overhead. In this work, we propose to use more efficient models with improved feature quality by introducing steerable equivariant convolutional networks for DP training. We demonstrate that our models outperform the current SOTA on CIFAR-10 by up to $9\%$ across different $\varepsilon$-values, while reducing the number of model parameters by a factor of $35$ and decreasing the computation time by more than $90\%$. Our results are a large step towards efficient model architectures that make optimal use of their parameters and bridge the privacy-utility gap between private and non-private deep learning for computer vision.
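For context, the training mechanism underlying such results is DP-SGD: per-example gradient clipping followed by Gaussian noise. Below is a minimal NumPy sketch of a single step; the paper's architectural contribution (steerable equivariant convolutions) is orthogonal and not shown.

    import numpy as np

    def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0):
        # per_example_grads: array of shape (batch, num_params)
        norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
        scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
        clipped = per_example_grads * scale      # clip each example to clip_norm
        summed = clipped.sum(axis=0)
        noise = np.random.normal(0.0, noise_multiplier * clip_norm, summed.shape)
        return (summed + noise) / len(per_example_grads)

    g = np.random.randn(32, 10)
    print(dp_sgd_step(g).shape)   # (10,)

Per-example clipping is what makes DP training expensive, which is why parameter-efficient architectures directly reduce the computational overhead.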
Dense real-time tracking and mapping from RGB-D images is an important tool for many robotic applications, such as navigation and manipulation. The recently presented Directional Truncated Signed Distance Function (DTSDF) is an augmentation of the regular TSDF that shows potential for more coherent maps and improved tracking performance. In this work, we present methods for rendering depth and color images from the DTSDF, making it a true drop-in replacement for the regular TSDF in established trackers. We evaluate the algorithm on well-established datasets and observe that our method improves tracking performance and increases the re-usability of mapped scenes. Furthermore, we add color integration, which notably improves color correctness at adjacent surfaces. Our novel formulation combining ICP with frame-to-keyframe photometric error minimization further improves tracking results. Lastly, we introduce Sim3 point-to-plane ICP for refining pose priors in a multi-sensor scenario with differing scale factors.
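As background for the tracking components mentioned above, here is a compact NumPy sketch of one linearized point-to-plane ICP step (small-angle approximation, with matched correspondences given; the Sim3 variant would additionally estimate a scale factor).

    import numpy as np

    def point_to_plane_step(src, dst, normals):
        # src, dst: (N, 3) matched points; normals: (N, 3) surface normals at dst.
        # Solve for a small rotation omega (axis-angle) and translation t
        # minimizing sum(((src_i + omega x src_i + t - dst_i) . n_i)^2).
        A = np.hstack([np.cross(src, normals), normals])   # (N, 6): [omega | t]
        b = -np.einsum('ij,ij->i', src - dst, normals)     # point-to-plane residuals
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x[:3], x[3:]                                # omega, t

    # Toy check: a pure x-shift with normals along x is recovered as t ~ [0.01, 0, 0].
    src = np.random.randn(100, 3)
    dst = src + np.array([0.01, 0.0, 0.0])
    normals = np.tile([1.0, 0.0, 0.0], (100, 1))
    print(point_to_plane_step(src, dst, normals)[1])

In a full tracker this step is iterated, with correspondences re-established by projecting rendered model geometry into the current frame.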
Open radio access network (ORAN) provides an open architecture for implementing the radio access network (RAN) of fifth generation (5G) and beyond mobile communications. As a key technology for the evolution to sixth generation (6G) systems, cell-free massive multiple-input multiple-output (CF-mMIMO) can effectively improve the spectral efficiency, peak rate and reliability of wireless communication systems. Starting from a scalable implementation of CF-mMIMO, we study a cell-free RAN (CF-RAN) under the ORAN architecture. Through theoretical analysis and numerical simulation, we investigate the uplink and downlink spectral efficiencies of CF-mMIMO with the new architecture. We then discuss the implementation issues of CF-RAN under the ORAN architecture, including time-frequency synchronization, over-the-air reciprocity calibration, low-layer splitting, the deployment of ORAN radio units (O-RUs), and artificial intelligence-based user association. Finally, we present representative experimental results for the uplink distributed reception and downlink coherent joint transmission of CF-RAN with commercial off-the-shelf O-RUs.
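As a toy illustration of the kind of spectral-efficiency computation involved (i.i.d. Rayleigh channels, perfect CSI, and unit noise power are simplifying assumptions, not the paper's setup), the following computes per-user uplink SE under maximum-ratio combining across distributed access points:

    import numpy as np

    M, K, p = 64, 8, 1.0    # access points, single-antenna users, uplink power
    g = (np.random.randn(M, K) + 1j * np.random.randn(M, K)) / np.sqrt(2)

    for k in range(K):
        v = g[:, k]                                  # MRC combining vector for user k
        signal = p * np.abs(v.conj() @ g[:, k])**2
        interference = sum(p * np.abs(v.conj() @ g[:, j])**2
                           for j in range(K) if j != k)
        sinr = signal / (interference + np.linalg.norm(v)**2)
        print(f"user {k}: SE = {np.log2(1 + sinr):.2f} bit/s/Hz")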
This work establishes rigorous, novel and widely applicable stability guarantees and transferability bounds for graph convolutional networks -- without reference to any underlying limit object or statistical distribution. Crucially, the utilized graph-shift operators (GSOs) are not necessarily assumed to be normal, allowing for the treatment of networks on both undirected and, for the first time, directed graphs. Stability to node-level perturbations is related to an 'adequate (spectral) covering' property of the filters in each layer. Stability to edge-level perturbations is related to Lipschitz constants and newly introduced semi-norms of filters. Results on stability to topological perturbations are obtained through recently developed mathematical-physics-based tools. As an important and novel example, it is showcased that graph convolutional networks are stable under graph coarse-graining procedures (replacing strongly connected sub-graphs by single nodes) precisely if the GSO is the graph Laplacian and the filters are regular at infinity. These new theoretical results are supported by corresponding numerical investigations.
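As a small numerical illustration of the edge-perturbation style of result (restricted to the classical undirected, normal-GSO setting, not the more general non-normal analysis above), the Frobenius-norm stability bound ||f(L) - f(L')||_F <= Lip(f) * ||L - L'||_F for a Lipschitz spectral filter f can be checked directly:

    import numpy as np

    def laplacian(W):
        return np.diag(W.sum(1)) - W

    def apply_filter(L, f):
        # Spectral filter for a symmetric GSO: f acts on the eigenvalues.
        lam, U = np.linalg.eigh(L)
        return U @ np.diag(f(lam)) @ U.T

    n = 20
    W = (np.random.rand(n, n) < 0.3).astype(float)
    W = np.triu(W, 1); W = W + W.T                       # random undirected graph
    Wp = W.copy(); Wp[0, 1] = Wp[1, 0] = 1 - Wp[0, 1]    # perturb a single edge

    f = lambda lam: 1.0 / (1.0 + lam)                    # Lipschitz constant 1 on [0, inf)
    L, Lp = laplacian(W), laplacian(Wp)
    lhs = np.linalg.norm(apply_filter(L, f) - apply_filter(Lp, f))
    rhs = np.linalg.norm(L - Lp)
    print(lhs <= rhs + 1e-9, lhs, rhs)                   # bound holds: True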