Semantic segmentation has attracted considerable attention in recent years. In robotics, segmentation can be used to identify a region of interest, or \emph{target area}. For example, in the RoboCup Standard Platform League (SPL), segmentation separates the soccer field from the background and from players on the field. For satellite or vehicle applications, it is often necessary to find certain regions such as roads, bodies of water or kinds of terrain. In this paper, we propose a novel approach to real-time target area segmentation based on a newly designed spatio-temporal network. The method operates under domain constraints defined by both the robot's hardware and its operating environment. The proposed network runs in real time, working within the constraints of limited run time and computing power. This work is compared against other real-time segmentation methods on a dataset generated by a Nao V6 humanoid robot simulating the RoboCup SPL competition. In this case, the target area is defined as the artificial grass field. The method is also tested on a maritime dataset collected by a moving vessel, where the aim is to separate the ocean region from the rest of the image. This dataset demonstrates that the proposed model can generalise to a variety of vision problems.
In RoboCup SPL, soccer field segmentation has been widely recognised as one of the most critical robot vision problems. Key challenges include dynamic lighting conditions, differing calibration states across individual robots, varying camera perspectives and more. In this paper, we propose a dataset that contains 20 videos recorded with Nao V5/V6 humanoid robots by team rUNSWift under different circumstances. Each video contains several consecutive high-resolution frames and the corresponding field labels. We propose this dataset to provide training data for the league to overcome the field segmentation problem. The dataset will be available online for download. Details of the annotation and examples of usage are explained in later sections.