Micro-computed tomography (micro-CT) is a widely used state-of-the-art instrument for studying the morphological structures of objects in various fields. Object rotation is the classical scanning mode in micro-CT, allowing data acquisition from different angles; however, its field of view (FOV) is primarily constrained by the size of the detector when high spatial resolution is required. Recently, we introduced a novel scanning mode called multiple source translation CT (mSTCT), which effectively enlarges the FOV of the micro-CT system. We further developed a virtual projection-based filtered backprojection (V-FBP) algorithm to handle truncated projections, albeit at the cost of acquisition efficiency (high-resolution reconstruction typically requires thousands of source samplings). In this paper, we present a new reconstruction algorithm for mSTCT, backprojection-filtration (BPF), which enables high-resolution reconstruction at a low source sampling rate. We also find that taking the derivative in BPF along different directions (source and detector) yields two distinct BPF algorithms (S-BPF and D-BPF), each with its own reconstruction performance characteristics. Through simulated and real experiments, we demonstrate that, to achieve the same high-resolution reconstruction, D-BPF can reduce source sampling by 75% compared with V-FBP. S-BPF shares characteristics with V-FBP, in that its spatial resolution is primarily determined by the source sampling.
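To make the S-BPF/D-BPF distinction concrete, a minimal schematic sketch (not the paper's reconstruction algorithm): given a toy projection array indexed by source position and detector cell, the two variants differ only in which axis the derivative is taken along before backprojection. The array shape and data here are hypothetical.

```python
import numpy as np

# Toy mSTCT projection data p[s, u]: rows index source positions s along the
# translation axis, columns index detector cells u. The two BPF variants
# differ in the direction of the derivative applied before backprojection.
p = np.random.default_rng(0).normal(size=(64, 128))

dp_source = np.gradient(p, axis=0)    # S-BPF: derivative along source translation
dp_detector = np.gradient(p, axis=1)  # D-BPF: derivative along detector cells
```

Both derivative arrays keep the sinogram shape; the subsequent backprojection and finite Hilbert inversion steps of BPF are omitted here.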
While electronic health records (EHRs) are a rich data source for biomedical research, these systems are not implemented uniformly across healthcare settings, and significant data may be missing due to healthcare fragmentation and a lack of interoperability between siloed EHRs. Since deleting cases with missing data may introduce severe bias into subsequent analyses, many authors prefer a multiple imputation (MI) strategy to recover the missing information. Unfortunately, although several published studies have reported promising results using the various MI algorithms now freely available for research, there is no consensus on which MI algorithm works best. Besides the choice of the MI strategy, the choice of the imputation algorithm and of its application settings is both crucial and challenging. In this paper, inspired by the seminal works of Rubin and van Buuren, we propose a methodological framework for evaluating and comparing several multiple imputation techniques, with the aim of choosing the most valid one for computing inferences in a clinical research study. We apply our framework to validate, and extend to a larger cohort, the results we presented in a previous study, in which we evaluated the influence of crucial patient descriptors on COVID-19 severity in patients with type 2 diabetes mellitus, using data provided by the National COVID Cohort Collaborative Enclave.
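A core step in any MI pipeline in the tradition of Rubin is pooling the analyses of the m imputed datasets into one inference. A minimal sketch of Rubin's pooling rules for a single estimated quantity (the toy estimates and variances below are illustrative, not from the study):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool point estimates and their variances from m imputed datasets
    using Rubin's rules: total variance = within + (1 + 1/m) * between."""
    m = len(estimates)
    q_bar = np.mean(estimates)        # pooled point estimate
    w = np.mean(variances)            # within-imputation variance
    b = np.var(estimates, ddof=1)     # between-imputation variance
    t = w + (1 + 1 / m) * b           # total variance of the pooled estimate
    return q_bar, t

# toy example: one coefficient estimated on m = 5 imputed datasets
est = [1.0, 1.1, 0.9, 1.05, 0.95]
var = [0.04, 0.05, 0.04, 0.045, 0.05]
q, t = pool_rubin(est, var)
```

The between-imputation term inflates the variance to reflect uncertainty due to the missing data itself, which is what makes MI inference valid where single imputation understates uncertainty.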
Most quadruped robots developed to date are highly actuated, which makes their control quite cumbersome. They need advanced electronic equipment to continuously solve convoluted inverse kinematics equations. In addition, they demand special and costly sensors to navigate autonomously through the environment, as traditional distance sensors usually fail because of the continuous perturbation caused by the robot's motion. Another challenge is maintaining the robot's dynamic stability while walking, which requires complicated, state-of-the-art control algorithms. This paper presents a thorough description of the hardware design and control architecture of our in-house prismatic-joint quadruped robot, PRISMA. We aim to forge a robust and kinematically stable quadruped robot that can use elementary control algorithms and conventional sensors to navigate an unknown environment. We discuss the benefits and limitations of the robot in terms of its motion, different foot trajectories, manufacturability, and control.
Crowd movement guidance has been a fascinating problem in various fields, such as easing traffic congestion during unusual events and evacuating people from an emergency-affected area. To take the reins of a crowd, there has been considerable demand for a decision support system that can answer a typical question: ``what will be the outcome of each of the possible options in the current situation?'' In this paper, we consider the problem of estimating the effects of crowd movement guidance from past data. To cope with the limited amount of available data, biased by past decision-makers, we leverage two recent techniques in deep representation learning for spatial data analysis and causal inference. We use a spatial convolutional operator to extract effective spatial features of crowds from a small amount of data, and use balanced representation learning based on integral probability metrics to mitigate the selection bias and the problem of missing counterfactual outcomes. To evaluate performance in estimating the treatment effects of possible guidance, we use a multi-agent simulator to generate realistic data on evacuation scenarios in a crowded theater, since no available datasets record the outcomes of all possible crowd movement guidance. The results of three experiments demonstrate that our proposed method reduces the estimation error by up to 56% compared with state-of-the-art methods.
On April 13th, 2019, OpenAI Five became the first AI system to defeat the world champions at an esports game. The game of Dota 2 presents novel challenges for AI systems, such as long time horizons, imperfect information, and complex, continuous state-action spaces, challenges that will become increasingly central to more capable AI systems. OpenAI Five leveraged existing reinforcement learning techniques, scaled to learn from batches of approximately 2 million frames every 2 seconds. We developed a distributed training system and tools for continual training which allowed us to train OpenAI Five for 10 months. By defeating the Dota 2 world champion (Team OG), OpenAI Five demonstrates that self-play reinforcement learning can achieve superhuman performance on a difficult task.
We introduce a framework that uses Generative Adversarial Networks (GANs) to study cognitive properties like memorability, aesthetics, and emotional valence. These attributes are of interest because we do not have a concrete visual definition of what they entail. What does it look like for a dog to be more or less memorable? GANs allow us to generate a manifold of natural-looking images with fine-grained differences in their visual attributes. By navigating this manifold in directions that increase memorability, we can visualize what it looks like for a particular generated image to become more or less memorable. The resulting ``visual definitions'' surface image properties (like ``object size'') that may underlie memorability. Through behavioral experiments, we verify that our method indeed discovers image manipulations that causally affect human memory performance. We further demonstrate that the same framework can be used to analyze image aesthetics and emotional valence. Visit the GANalyze website at http://ganalyze.csail.mit.edu/.
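The core navigation idea can be sketched in a few lines: step a latent code along a direction that increases an assessor's score, then decode each stepped code into an image. For illustration only, the assessor here is a hypothetical linear scorer on the latent code (in GANalyze the assessor scores the generated image, and the direction is learned):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "memorability assessor": score(z) = w . z
w = rng.normal(size=16)
w /= np.linalg.norm(w)   # unit-norm direction in latent space

def step(z, alpha):
    """Move the latent code along the direction that raises the assessor
    score; negative alpha makes the property weaker instead of stronger."""
    return z + alpha * w

z = rng.normal(size=16)
# Scores along the trajectory: each stepped code would be decoded by the
# GAN generator to visualize the attribute change.
scores = [float(w @ step(z, a)) for a in (-1.0, 0.0, 1.0)]
```

Decoding the sequence of stepped codes with the generator yields the ``visual definition'': a strip of images in which the target attribute smoothly increases.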
We use reinforcement learning (RL) to learn dexterous in-hand manipulation policies which can perform vision-based object reorientation on a physical Shadow Dexterous Hand. The training is performed in a simulated environment in which we randomize many of the physical properties of the system like friction coefficients and an object's appearance. Our policies transfer to the physical robot despite being trained entirely in simulation. Our method does not rely on any human demonstrations, but many behaviors found in human manipulation emerge naturally, including finger gaiting, multi-finger coordination, and the controlled use of gravity. Our results were obtained using the same distributed RL system that was used to train OpenAI Five. We also include a video of our results: https://youtu.be/jwSbzNHGflM
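Domain randomization, the key transfer ingredient mentioned above, amounts to sampling a fresh set of physical parameters for each simulated episode so the policy cannot overfit any single simulator configuration. A minimal sketch; the parameter names and ranges below are illustrative assumptions, not the paper's actual randomization set:

```python
import random

def randomize_physics(rng):
    """Sample one randomized simulation instance (hypothetical ranges;
    the real system randomizes many more properties, including the
    object's visual appearance for the vision model)."""
    return {
        "friction": rng.uniform(0.5, 1.5),          # contact friction scale
        "object_mass": rng.uniform(0.05, 0.5),      # kg
        "actuator_gain": rng.uniform(0.8, 1.2),     # motor strength scale
        "observation_noise": rng.uniform(0.0, 0.01) # sensor noise stddev
    }

rng = random.Random(0)
# One new parameter draw per training episode:
episodes = [randomize_physics(rng) for _ in range(3)]
```

Training across many such draws encourages policies that are robust to the sim-to-real gap, since the physical robot is, in effect, just one more sample from the randomized family.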
This paper presents a genetic algorithm (GA) to solve the container storage problem in a port. The problem is studied with different container types: regular, open-side, open-top, tank, empty, and refrigerated containers. The objective is to determine an optimal container arrangement that respects customers' delivery deadlines, reduces container rehandling operations, and minimizes the container ship's stop time. We detail an adaptation of the genetic algorithm to the container storage problem, and present and discuss experimental results. The proposed approach was compared with a Last In, First Out (LIFO) algorithm applied to the same problem and achieved good results.
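A minimal GA sketch for a heavily simplified version of the problem: one stack, a permutation encoding of container order, and a fitness that counts rehandles (a container lower in the stack that must leave earlier than one above it). This is an illustrative toy under stated assumptions, not the paper's encoding or operators:

```python
import random

def rehandles(stack, deadline):
    """Count rehandles in a bottom-to-top stack: a container below one
    with a later deadline must be dug out, forcing a rehandle (toy model)."""
    return sum(1 for i in range(len(stack)) for j in range(i + 1, len(stack))
               if deadline[stack[i]] < deadline[stack[j]])

def ga(deadline, pop_size=40, gens=200, seed=0):
    """Elitist GA over permutations: keep the best half, refill with
    order-crossover children plus a swap mutation."""
    rng = random.Random(seed)
    n = len(deadline)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda s: rehandles(s, deadline))
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)           # one-cut order crossover
            child = a[:cut] + [g for g in b if g not in a[:cut]]
            i, j = rng.sample(range(n), 2)      # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda s: rehandles(s, deadline))

deadlines = [3, 1, 4, 1, 5, 9, 2, 6]   # smaller value = picked up earlier
best = ga(deadlines)                   # bottom-to-top storage order
```

The real problem adds container-type compatibility constraints and the ship's stop time to the objective; those would enter as extra terms or hard constraints in the fitness function.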
Short abstracts by computational linguistics researchers at the University of Pennsylvania describing ongoing individual and joint projects.