A physical selfie stick extends the user's reach, enabling the creation of personal photos that include more of the background scene. Conversely, a quadcopter can capture photos at distances unattainable by a human, but teleoperating a quadcopter to a good viewpoint is a non-trivial task. This paper presents a natural interface for quadcopter photography, the Selfie Drone Stick, which allows the user to guide the quadcopter to the optimal vantage point based on the phone's sensors. The user points the phone once, and the quadcopter autonomously flies to the target viewpoint based on the phone camera and IMU sensor data. Visual servoing is achieved through the combination of a dense neural network object detector, which matches the image captured by the phone camera to a bounding box in the scene, and a Deep Q-Network controller, which flies the quadcopter to the desired vantage point. Our deep learning architecture is trained with a combination of real-world images and simulated flight data. Integrating the deep RL controller with an intuitive interface provides a more positive user experience than a standard teleoperation paradigm.
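The abstract's Deep Q-Network controller maps the detected bounding box to discrete flight actions. A minimal sketch of that idea is below; the state layout, action set, network sizes, and all function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Hypothetical DQN-style controller: the state concatenates the current
# detected bounding box and the target bounding box (normalized x, y, w, h),
# and the policy picks a discrete quadcopter action.
ACTIONS = ["forward", "backward", "left", "right", "up", "down", "hover"]

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(8, 32))            # 8-d state -> hidden
W2 = rng.normal(scale=0.1, size=(32, len(ACTIONS)))  # hidden -> Q-values

def q_values(state):
    """Two-layer Q-network: state -> one Q-value per discrete action."""
    hidden = np.maximum(0.0, state @ W1)  # ReLU
    return hidden @ W2

def select_action(state, epsilon=0.1):
    """Epsilon-greedy policy over the discrete action set."""
    if rng.random() < epsilon:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(q_values(state)))

# Example state: current box vs. target box (both normalized).
state = np.array([0.4, 0.5, 0.2, 0.3,   # current (x, y, w, h)
                  0.5, 0.5, 0.4, 0.6])  # target  (x, y, w, h)
action = select_action(state, epsilon=0.0)
print(ACTIONS[action])
```

In training, the weights would be updated from simulated flight episodes; here they are random, so the chosen action is arbitrary.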
Object detection models based on convolutional neural networks (CNNs) demonstrate impressive performance when trained on large-scale labeled datasets. While a generic object detector trained on such a dataset performs adequately in applications where the input data is similar to user photographs, the detector performs poorly on small objects, particularly ones with limited training data or imaged from uncommon viewpoints. Moreover, a specific room will contain many objects that are missed by standard object detectors, frustrating a robot that continually operates in the same indoor environment. This paper describes a system for rapidly creating customized object detectors. Data is collected from a quadcopter that is teleoperated with an interactive interface. Once an object is selected, the quadcopter autonomously photographs the object from multiple viewpoints to collect training data for DUNet (Dense Upscaled Network), our proposed model for learning customized object detectors from scratch given limited data. Our experiments compare the performance of learning models from scratch with DUNet vs.\ fine-tuning existing state-of-the-art object detectors, both on our indoor robotics domain and on standard datasets.
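Comparisons between detectors trained from scratch and fine-tuned ones are conventionally scored by intersection-over-union (IoU) against ground-truth boxes. A minimal sketch of that metric, assuming `(x1, y1, x2, y2)` box coordinates (the paper's exact evaluation protocol is not shown here):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection is typically counted as correct when IoU >= 0.5
# with the ground-truth box.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # → 0.14285714285714285 (1/7)
```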