Abstract: DESTINY+ is an upcoming JAXA Epsilon medium-class mission that will fly by multiple asteroids, including Phaethon. As an asteroid flyby observation instrument, a telescope mechanically capable of single-axis rotation, named TCAP, is mounted on the spacecraft to track and observe the target asteroids during flyby. As in past flyby missions that used rotating telescopes, TCAP also serves as a navigation camera for autonomous optical navigation during the closest-approach phase. To mitigate degradation of the navigation accuracy, past missions calibrated the navigation camera's alignment before starting optical navigation. However, such calibration requires significant operational time and imposes constraints on the operation sequence. Against this background, the DESTINY+ team has studied the possibility of reducing operational costs by allowing TCAP alignment errors to remain. This paper describes an autonomous optical navigation algorithm, proposed in this context, that is robust to the misalignment of rotating telescopes. In the proposed method, the misalignment of the telescope is estimated simultaneously with the spacecraft's orbit relative to the flyby target. To handle the nonlinear relationship between the misalignment and the observations, the proposed method employs the unscented Kalman filter instead of the extended Kalman filter widely used in past studies. The proposed method was evaluated with numerical simulations on a PC and with hardware-in-the-loop simulations, taking the Phaethon flyby in the DESTINY+ mission as an example. The validation results suggest that the proposed method can mitigate the misalignment-induced degradation of optical navigation accuracy at a computational cost reasonable for onboard computers.
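The core idea of this abstract, augmenting the navigation state with the telescope misalignment and estimating both with an unscented Kalman filter, can be illustrated with a minimal sketch. The sketch below assumes simplified 2D straight-line relative dynamics, a single constant misalignment angle that biases a bearing measurement, and illustrative noise levels; none of these details are from the paper itself.

```python
# Sketch: estimate relative state and telescope misalignment jointly with a UKF.
# The 2D dynamics, single misalignment angle, and all numbers are assumptions
# for illustration, not the mission's actual formulation.
import numpy as np

def sigma_points(x, P, kappa=0.0):
    """Generate 2n+1 sigma points and weights for mean x and covariance P."""
    n = x.size
    S = np.linalg.cholesky((n + kappa) * P)  # columns are sigma-point offsets
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 0.5 / (n + kappa))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def f(x, dt):
    """Propagation: straight-line relative motion; misalignment stays constant."""
    px, py, vx, vy, mis = x
    return np.array([px + vx * dt, py + vy * dt, vx, vy, mis])

def h(x):
    """Observation: bearing to the target, biased by the misalignment angle.
    The misalignment enters the measurement nonlinearly, motivating the UKF."""
    px, py, _, _, mis = x
    return np.array([np.arctan2(py, px) + mis])

def ukf_step(x, P, z, dt, Q, R):
    # Predict: push sigma points through the dynamics.
    pts, w = sigma_points(x, P)
    Xp = np.array([f(p, dt) for p in pts])
    x_pred = w @ Xp
    P_pred = Q + sum(wi * np.outer(d, d) for wi, d in zip(w, Xp - x_pred))
    # Update: push predicted sigma points through the measurement model.
    pts, w = sigma_points(x_pred, P_pred)
    Zp = np.array([h(p) for p in pts])
    z_pred = w @ Zp
    Pzz = R + sum(wi * np.outer(d, d) for wi, d in zip(w, Zp - z_pred))
    Pxz = sum(wi * np.outer(dx, dz)
              for wi, dx, dz in zip(w, pts - x_pred, Zp - z_pred))
    K = Pxz @ np.linalg.inv(Pzz)
    return x_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T

# One illustrative step with state [px, py, vx, vy, misalignment]:
x = np.array([1e5, 2e4, -8.0, 0.0, 0.0])
P = np.diag([1e6, 1e6, 1.0, 1.0, np.deg2rad(0.5) ** 2])
Q = np.eye(5) * 1e-6
R = np.array([[np.deg2rad(0.01) ** 2]])
x, P = ukf_step(x, P, np.array([0.20]), dt=1.0, Q=Q, R=R)
```

Because the sigma points sample the nonlinear bearing model directly, no Jacobian of the measurement with respect to the misalignment is needed, which is the practical advantage the abstract attributes to the UKF over the EKF.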
Abstract: In traditional Chinese medicine diagnosis, tongue segmentation has matured considerably, but segmentation has seen little application in Chinese medicine eye diagnosis. We propose Res-UNet, built on the U2Net network architecture, together with a data augmentation toolkit designed for small datasets; in the decoder, the denoised low-level feature blocks are fused with the high-level features. The model is evaluated using the number of network parameters and inference time as indicators, and different eye-segmentation frameworks are compared using mIoU, Precision, Recall, F1-score, and FLOPs. On the public UBIVIS.V1 dataset, the proposed network reaches 97.8% mIoU, 97.7% S-measure, and 99.09% F1-score, with a total parameter size of 167.83 MB for 320×320 RGB input images. Because this parameter count is large, we also experimented with a small-scale U2Net combined with the Res module, whose parameter size is only 4.63 MB while its metrics remain similar to U2Net, verifying the effectiveness of our structure. The proposed network achieves the best segmentation results among all compared networks and lays a foundation for subsequent recognition of symptoms from the visual apparatus.
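The two architectural elements the abstract names, a residual ("Res") block added to a U2Net-style encoder-decoder and the decoder-side fusion of denoised low-level features with high-level features, can be sketched as below. The channel sizes, module names, and the bilinear-upsampling-plus-concatenation fusion are illustrative assumptions, not the paper's exact design.

```python
# Sketch of a residual block and a decoder stage that fuses denoised
# low-level features with upsampled high-level features. Layer sizes and
# names are assumptions for illustration, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Two conv-BN layers with an identity (or 1x1-projected) skip connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1, bias=False))

    def forward(self, x):
        y = F.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return F.relu(y + self.skip(x))

class FuseDecoder(nn.Module):
    """Upsample high-level features, concatenate them with the denoised
    low-level features, and refine the result with a residual block."""
    def __init__(self, high_ch, low_ch, out_ch):
        super().__init__()
        self.refine = ResBlock(high_ch + low_ch, out_ch)

    def forward(self, high, low_denoised):
        high = F.interpolate(high, size=low_denoised.shape[2:],
                             mode="bilinear", align_corners=False)
        return self.refine(torch.cat([high, low_denoised], dim=1))

# Illustrative shapes for a 320x320 RGB input downsampled twice:
high = torch.randn(1, 128, 80, 80)    # high-level features
low = torch.randn(1, 64, 160, 160)    # denoised low-level features
out = FuseDecoder(128, 64, 64)(high, low)
print(out.shape)  # -> torch.Size([1, 64, 160, 160])
```

The residual skip keeps gradients flowing through the small network, which is consistent with the abstract's finding that a 4.63 MB U2Net-plus-Res variant retains accuracy similar to the full-size model.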