The assessment and treatment of motor symptoms such as tremor in Parkinson's disease depend exclusively on the physician's visual observation of standardised movements (i.e., motor tasks). Wearable sensors such as accelerometers can detect some manifestations of these pathological signs in movement disorders. Sensor data from motor tasks, however, must be processed together with annotated data from clinical experts. Hence, we designed TreCap, a custom-built wearable device with new software to capture and evaluate motor symptoms such as tremor in real time. The software systematically processes and stores inertial sensor data tailored to each motor task, including annotated data from clinical rating scores and deep brain stimulation parameters. For prototype testing, the wearable device was validated in a pilot study on subjects with physiological hand tremor. The processed data sets are suitable for machine learning to classify motor tasks: results on healthy subjects demonstrate an accuracy of 95% with support vector machine algorithms. The TreCap software is expandable and allows full access to the configuration of all sensors via Bluetooth. Finally, the device as a whole provides a platform suitable for future clinical trials.
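As a concrete illustration of the classification step, below is a minimal Python sketch of motor-task classification with an SVM over windowed accelerometer features; the feature set (per-axis mean, standard deviation, and dominant frequency) and the 100 Hz sampling rate are illustrative assumptions, not the exact TreCap pipeline.

```python
# Minimal sketch: classify motor tasks from windowed accelerometer data with
# an SVM. Feature choices and sampling rate are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(window, fs=100.0):
    """window: (n_samples, 3) accelerometer segment; fs: sampling rate in Hz."""
    detrended = window - window.mean(axis=0)
    spectrum = np.abs(np.fft.rfft(detrended, axis=0))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    dominant = freqs[spectrum.argmax(axis=0)]  # dominant frequency per axis
    return np.concatenate([window.mean(axis=0), window.std(axis=0), dominant])

def train_classifier(windows, labels):
    """windows: list of (n_samples, 3) arrays; labels: motor-task label per window."""
    X = np.stack([extract_features(w) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return clf.fit(X, labels)
```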
Generating a new font library is a labor-intensive and time-consuming job for glyph-rich scripts. Few-shot font generation is therefore desirable, since it requires only a few reference glyphs and no fine-tuning at test time. Existing methods follow the style-content disentanglement paradigm and expect novel fonts to be produced by combining the style codes of the reference glyphs with the content representations of the source. However, these few-shot font generation methods either fail to capture content-independent style representations or employ localized, component-wise style representations, which are insufficient to model the many Chinese font styles that involve hyper-component features such as inter-component spacing and "connected-stroke". To resolve these drawbacks and make the style representations more reliable, we propose a self-supervised cross-modality pre-training strategy and a cross-modality transformer-based encoder that is conditioned jointly on the glyph image and the corresponding stroke labels. The cross-modality encoder is pre-trained in a self-supervised manner to effectively capture cross- and intra-modality correlations, which facilitates content-style disentanglement and the modeling of style representations at all scales (stroke level, component level and character level). The pre-trained encoder is then applied to the downstream font generation task without fine-tuning. Experimental comparisons with state-of-the-art methods demonstrate that our method successfully transfers styles at all scales. In addition, it requires only one reference glyph and achieves the lowest bad-case rate in the few-shot font generation task, 28% lower than that of the second-best method.
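The following PyTorch sketch shows the general shape of a cross-modality encoder jointly conditioned on a glyph image and its stroke labels; the patch size, embedding dimension, stroke vocabulary, and depth are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of a cross-modality encoder over glyph-image patches and
# stroke-label tokens. All hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

class CrossModalityEncoder(nn.Module):
    def __init__(self, img_size=64, patch=8, dim=256, n_strokes=32, depth=4):
        super().__init__()
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.stroke_embed = nn.Embedding(n_strokes, dim)
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, glyph, strokes):
        # glyph: (B, 1, H, W) image; strokes: (B, L) stroke-label ids.
        img_tok = self.patch_embed(glyph).flatten(2).transpose(1, 2) + self.pos
        stroke_tok = self.stroke_embed(strokes)
        # Joint self-attention over the concatenated token sequence captures
        # both cross- and intra-modality correlations.
        return self.encoder(torch.cat([img_tok, stroke_tok], dim=1))
```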
Extracting structure information from dialogue data can help us better understand user and system behaviors. In task-oriented dialogues, dialogue structure has often been considered as transition graphs among dialogue states. However, annotating dialogue states manually is expensive and time-consuming. In this paper, we propose a simple yet effective approach for structure extraction in task-oriented dialogues. We first detect and cluster possible slot tokens with a pre-trained model to approximate the dialogue ontology for a target domain. Then we track the status of each identified token group and derive a state transition structure. Empirical results show that our approach outperforms unsupervised baseline models by a large margin in dialogue structure extraction. In addition, we show that data augmentation based on the extracted structures enriches the surface forms of training data and can achieve a significant performance boost in dialogue response generation.
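A minimal sketch of the two-stage idea follows: cluster candidate slot tokens by contextual embedding to approximate an ontology, then count transitions between the derived states. The BERT checkpoint and the cluster count are illustrative placeholders, not the paper's setup.

```python
# Illustrative sketch: embed candidate slot tokens, cluster them into groups,
# and build a weighted state-transition structure over derived states.
from collections import Counter
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

@torch.no_grad()
def token_embeddings(utterance):
    inputs = tok(utterance, return_tensors="pt")
    return enc(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)

def cluster_slot_tokens(candidate_embeddings, n_groups=20):
    """candidate_embeddings: (n_tokens, hidden_dim) array stacked over the corpus."""
    return KMeans(n_clusters=n_groups).fit_predict(candidate_embeddings)

def transition_structure(state_sequences):
    """state_sequences: per-dialogue sequences of derived dialogue states."""
    edges = Counter()
    for states in state_sequences:
        edges.update(zip(states, states[1:]))  # weighted transition edges
    return edges
```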
The Artificial Intelligence for RObust Glaucoma Screening (AIROGS) Challenge is held to develop solutions for glaucoma screening from color fundus photography that are robust to real-world scenarios. This report describes our method submitted to the AIROGS challenge. Our method employs convolutional neural networks to classify input images as "referable glaucoma" or "no referable glaucoma". In addition, we introduce an inference-time out-of-distribution (OOD) detection method to identify ungradable images. Our OOD detection is based on an energy-based method combined with activation rectification.
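The combination of an energy score with activation rectification can be sketched in a few lines; the clipping threshold and temperature below are illustrative hyperparameters, and the split into penultimate features plus a linear head is an assumption about the network layout.

```python
# Sketch of energy-based OOD scoring with activation rectification: clip
# abnormally high penultimate activations, then score with the energy of the
# resulting logits. Threshold and temperature are illustrative.
import torch

@torch.no_grad()
def ood_score(features, classifier_head, clip=1.0, temperature=1.0):
    """features: penultimate activations (B, D); classifier_head: final linear layer."""
    rectified = features.clamp(max=clip)  # rectify overconfident activations
    logits = classifier_head(rectified)
    # Higher (less negative) energy -> more likely OOD / ungradable.
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)
```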
Fact verification (FV) is a challenging task which aims to verify a claim using multiple evidential sentences from trustworthy corpora, e.g., Wikipedia. Most existing approaches follow a three-step pipeline framework, including document retrieval, sentence retrieval and claim verification. High-quality evidence provided by the first two steps is the foundation of effective reasoning in the last step. Despite its importance, high-quality evidence is rarely studied by existing works for FV, which often adopt off-the-shelf models to retrieve relevant documents and sentences in an "index-retrieve-then-rank" fashion. This classical approach has clear drawbacks: i) a large document index as well as a complicated search process is required, leading to considerable memory and computational overhead; ii) independent scoring paradigms fail to capture the interactions among documents and sentences in ranking; iii) a fixed number of sentences is selected to form the final evidence set. In this work, we propose GERE, the first system that retrieves evidence in a generative fashion, i.e., generating the document titles as well as evidence sentence identifiers. This enables us to mitigate the aforementioned technical issues since: i) the memory and computational cost is greatly reduced because the document index is eliminated and the heavy ranking process is replaced by a light generative process; ii) the dependency between documents and that between sentences can be captured via the sequential generation process; iii) the generative formulation allows us to dynamically select a precise set of relevant evidence for each claim. The experimental results on the FEVER dataset show that GERE achieves significant improvements over the state-of-the-art baselines, in both time efficiency and memory efficiency.
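The generative formulation can be illustrated with a generic seq2seq model that maps a claim directly to candidate document titles; the BART checkpoint below is a placeholder rather than GERE's trained weights, and constrained decoding over the set of valid titles is omitted for brevity.

```python
# Hedged sketch of generative evidence retrieval: a seq2seq model generates
# document titles from a claim, replacing an index-and-rank pipeline.
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def generate_titles(claim, n=5):
    """Return n candidate document titles decoded from beam search."""
    inputs = tok(claim, return_tensors="pt")
    outputs = model.generate(**inputs, num_beams=n,
                             num_return_sequences=n, max_length=32)
    return [tok.decode(o, skip_special_tokens=True) for o in outputs]
```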
Community support and engagement for different domains in the tech industry have changed and evolved over the years. In this study, we aim to understand, analyze and predict technology trends in a scientific manner, having collected data on numerous topics and their growth over the past decade. We apply machine learning models to the collected data to understand, analyze and forecast trends in the advancement of different fields. We show that certain technical concepts such as Python, machine learning, and Keras have an undisputed uptrend, and we conclude that the Stackindex model forecasts with high accuracy and can be a viable tool for forecasting different tech domains.
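As a sketch of the forecasting step, the snippet below fits a simple per-topic trend model to yearly activity counts and extrapolates; the data layout and the log-linear model are assumptions for illustration, and the paper's Stackindex model may differ.

```python
# Illustrative per-topic trend forecast: fit a log-linear model to yearly
# activity counts and extrapolate a few years ahead.
import numpy as np
from sklearn.linear_model import LinearRegression

def forecast_topic(years, counts, horizon=3):
    """years: 1-D array of years; counts: activity per year for one topic."""
    X = np.asarray(years, dtype=float).reshape(-1, 1)
    model = LinearRegression().fit(X, np.log1p(counts))  # log scale damps spikes
    future = np.arange(years[-1] + 1, years[-1] + 1 + horizon).reshape(-1, 1)
    return np.expm1(model.predict(future))               # back to raw counts
```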
Compared with rate-based artificial neural networks, Spiking Neural Networks (SNNs) provide a more biologically plausible model of the brain, but how they perform supervised learning remains elusive. Inspired by recent work of Bengio et al., we propose a supervised learning algorithm based on Spike-Timing-Dependent Plasticity (STDP) for a hierarchical SNN consisting of Leaky Integrate-and-Fire (LIF) neurons. A time window is designed for the presynaptic neuron, and only the spikes within this window take part in the STDP update. The model is trained on the MNIST dataset, and the classification accuracy approaches that of a Multilayer Perceptron (MLP) with a similar architecture trained by the standard back-propagation algorithm.
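The windowed STDP rule can be sketched as follows; the window length, learning rates, and time constant are illustrative constants, not the paper's exact values.

```python
# Sketch of a windowed STDP weight update: only presynaptic spikes within a
# window around the postsynaptic spike contribute. Constants are illustrative.
import numpy as np

def stdp_update(w, pre_spikes, t_post, window=20.0,
                a_plus=0.01, a_minus=0.012, tau=10.0):
    """w: synaptic weight; pre_spikes: presynaptic spike times (ms);
    t_post: postsynaptic spike time (ms)."""
    for t_pre in pre_spikes:
        dt = t_post - t_pre
        if abs(dt) > window:          # spikes outside the window are ignored
            continue
        if dt > 0:                    # pre before post: potentiation
            w += a_plus * np.exp(-dt / tau)
        else:                         # post before pre: depression
            w -= a_minus * np.exp(dt / tau)
    return w
```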
Classical linear quadratic (LQ) control centers on linear time-invariant (LTI) systems, where the control-state pairs incur a quadratic cost with time-invariant parameters. Recent advances in online optimization and control have provided novel tools to study LQ problems that are robust to time-varying cost parameters. Inspired by this line of research, we study the distributed online LQ problem for identical LTI systems. Consider a multi-agent network where each agent is modeled as an LTI system. The LTI systems are associated with decoupled, time-varying quadratic costs that are revealed sequentially. The goal of the network is to make the control sequence of all agents competitive with that of the best centralized policy in hindsight, captured by the notion of regret. We develop a distributed variant of the online LQ algorithm, which runs distributed online gradient descent with a projection onto a semi-definite programming (SDP) feasible set to generate controllers. We establish a regret bound scaling as the square root of the finite time horizon, implying that the agents reach consensus as time grows. We further provide numerical experiments verifying our theoretical result.
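One round of the distributed update might be sketched as below: consensus averaging with neighbors, an online gradient step, then a projection. For brevity the projection shown is onto the PSD cone, whereas the algorithm itself projects onto an SDP feasible set encoding stabilizing controllers; all names here are illustrative.

```python
# Hedged one-round sketch: consensus averaging, gradient step, projection.
# PSD-cone projection stands in for the paper's SDP feasible-set projection.
import numpy as np

def psd_project(M):
    """Euclidean projection of a matrix onto the PSD cone."""
    sym = (M + M.T) / 2.0
    vals, vecs = np.linalg.eigh(sym)
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T

def distributed_ogd_step(S_i, neighbor_S, grad_i, eta):
    """S_i: agent i's matrix variable; neighbor_S: neighbors' variables;
    grad_i: local cost gradient; eta: step size."""
    avg = (S_i + sum(neighbor_S)) / (1 + len(neighbor_S))  # consensus step
    return psd_project(avg - eta * grad_i)                 # gradient + projection
```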
An Autonomous Road Vehicle (ARV) can navigate various types of road networks using inputs such as throttle (acceleration), braking (deceleration), and steering (change of lateral direction). In most ARV driving scenarios that involve normal vehicle traffic and encounters with vulnerable road users (VRUs), ARVs are not required to take evasive action. This paper presents a novel Emergency Obstacle Avoidance Maneuver (EOAM) methodology for ARVs traveling at higher speeds on lower-friction road surfaces, involving time-critical maneuver determination and control. The proposed EOAM framework treats the ARV's sensing, perception, control, and actuation abilities as one cohesive system to accomplish avoidance of an on-road obstacle, based first on performance feasibility and second on passenger comfort, and is designed to integrate well with an ARV's high-level system. Co-simulation including the ARV EOAM logic in Simulink and a vehicle model in CarSim is conducted at speeds ranging from 55 to 165 km/h and on road surfaces with friction coefficients ranging from 1.0 to 0.1. The results are analyzed and presented in the context of an entire ARV system, with implications for future work.
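A back-of-the-envelope feasibility check illustrates why maneuver selection becomes time-critical at high speed and low friction; the point-mass braking model and the decision rule below are illustrative, not the paper's EOAM logic.

```python
# Illustrative feasibility check: compare an idealized point-mass braking
# distance against the distance to the obstacle for given speed and friction.
G = 9.81  # gravitational acceleration, m/s^2

def braking_distance(speed_kmh, mu):
    v = speed_kmh / 3.6                 # km/h -> m/s
    return v * v / (2.0 * mu * G)       # idealized stopping distance, m

def braking_feasible(speed_kmh, mu, obstacle_dist_m):
    return braking_distance(speed_kmh, mu) < obstacle_dist_m

# e.g. at 165 km/h with friction mu = 0.1, stopping needs roughly 1070 m,
# so a steering maneuver may be the only feasible evasive action.
```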
In this report we propose six new implementations of the ruCLIP model trained on our 240M image-text pairs. The accuracy results are compared with the original CLIP model combined with Ru-En translation (OPUS-MT) on 16 datasets from different domains. Our best implementations outperform the CLIP + OPUS-MT solution on most of the datasets in few-shot and zero-shot tasks. We briefly describe the implementations and concentrate on the conducted experiments; an inference execution time comparison is also presented.
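For context, zero-shot classification with a CLIP-style dual encoder follows the pattern sketched below, shown here with the original English CLIP via Hugging Face; ruCLIP follows the same image/text dual-encoder pattern with Russian prompts, and the checkpoint and prompt template are illustrative.

```python
# Sketch of CLIP-style zero-shot classification: score an image against one
# text prompt per class and pick the highest-similarity class.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def zero_shot(image, class_names):
    prompts = [f"a photo of a {c}" for c in class_names]
    inputs = proc(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # image-text similarity
    return class_names[logits.softmax(dim=-1).argmax().item()]
```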