We numerically demonstrate a silicon add-drop microring-based reservoir computing scheme that combines parallel delayed inputs with wavelength-division multiplexing. The scheme solves memory-demanding tasks, such as time-series prediction, with good performance and without requiring external optical feedback.
We numerically demonstrate a microring-based time-delay reservoir computing scheme that simultaneously solves three tasks: time-series prediction, classification, and wireless channel equalization. Each task, performed on its own wavelength-multiplexed channel, achieves state-of-the-art performance when the input power and frequency detuning are optimized.
Microring resonators (MRRs) are promising devices for time-delay photonic reservoir computing, but the impact of the different physical effects taking place in MRRs on reservoir computing performance is yet to be fully understood. We numerically analyze the impact of linear losses, as well as of the thermo-optic and free-carrier relaxation times, on the prediction error of the NARMA-10 time-series task. We demonstrate the existence of three regions, defined by the input power and the frequency detuning between the optical source and the microring resonance, that reveal the cavity's transition from the linear to the nonlinear regime. One of these regions offers very low prediction error at relatively low input power and with a small number of nodes, while the other regions either lack nonlinearity or become unstable. This study provides insight into the design of MRRs and the optimization of their physical properties for improving the prediction performance of time-delay reservoir computing.
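For readers unfamiliar with the NARMA-10 benchmark referenced above, the target series follows a standard tenth-order nonlinear autoregressive recursion. A minimal plain-Python sketch of the usual definition (the `narma10` helper name and the fixed seed are ours, not from the paper) is:

```python
import random

def narma10(n, seed=0):
    """Generate input u and target y for the NARMA-10 benchmark.

    Standard recursion:
        y[t+1] = 0.3*y[t] + 0.05*y[t]*sum(y[t-9:t+1]) + 1.5*u[t-9]*u[t] + 0.1
    with inputs u[t] drawn uniformly from [0, 0.5].
    """
    rng = random.Random(seed)
    u = [rng.uniform(0.0, 0.5) for _ in range(n)]
    y = [0.0] * n  # zero initial conditions for the first 10 steps
    for t in range(9, n - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * sum(y[t - 9:t + 1])  # 10-step memory term
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y
```

A reservoir-computing benchmark then trains a linear readout on the reservoir states to predict `y` from `u`; the ten-step product term is what makes the task memory-demanding.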
End-to-end learning has become a popular method for joint transmitter and receiver optimization in optical communication systems. Such an approach may require a differentiable channel model, which hinders the optimization of links based on directly modulated lasers (DMLs), since no analytical solution is available for the DML behavior in the large-signal regime. In this paper, this problem is addressed by developing and comparing differentiable machine learning-based surrogate models. The models are quantitatively assessed in terms of root mean square error and training/testing time. Once trained, the surrogates are tested in a numerical equalization setup resembling a practical end-to-end scenario. Based on this numerical investigation, the convolutional attention transformer is shown to outperform the other models considered.
Low-complexity neural networks (NNs) have been successfully applied for digital signal processing (DSP) in short-reach intensity-modulated directly detected optical links, where chromatic dispersion-induced impairments significantly limit the transmission distance. NN-based equalizers are usually optimized independently of other DSP components, such as matched filtering, which may result in lower equalization performance. Alternatively, optimizing an NN equalizer to perform the functions of multiple DSP blocks may increase the transmission reach while keeping the complexity low. In this work, we propose a low-complexity NN that performs samples-to-symbol equalization, meaning that the NN-based equalizer includes matched filtering and downsampling. We compare it, in terms of performance and computational complexity, to a sample-to-sample equalization approach followed by matched filtering and downsampling. Both approaches are evaluated using three different types of NNs combined with optical preprocessing. We numerically and experimentally show that the proposed samples-to-symbol equalization approach, applied to 32 GBd on-off keying (OOK) signals, outperforms the samples-domain alternative while keeping the computational complexity low. Additionally, the different types of NN-based equalizers are compared in terms of performance versus computational complexity.
We present and experimentally evaluate a transfer learning approach to address experimental data scarcity when training neural network (NN) models of Mach-Zehnder interferometer mesh-based optical matrix multipliers. Our approach pre-trains the model on synthetic data generated from a less accurate analytical model and fine-tunes it with experimental data. We demonstrate that this method yields significant reductions in modeling error compared to using the analytical model, or a standalone NN model, when training data is limited. Utilizing regularization techniques and ensemble averaging, we achieve < 1 dB root-mean-square error on the matrix weights implemented by a photonic chip while using only 25% of the available data.
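The pre-train/fine-tune workflow described above can be illustrated generically. This toy plain-Python sketch replaces the NN and the photonic-chip data with a scalar linear model fitted by gradient descent — the "analytical model" produces abundant but slightly biased synthetic data, while only a few "experimental" samples of the true response are available (all coefficients here are illustrative, not from the paper):

```python
import random

def train(model, data, lr=0.05, epochs=200):
    """Fit y = w*x + b by full-batch gradient descent on squared error."""
    w, b = model
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            err = (w * x + b) - y
            gw += 2 * err * x / len(data)
            gb += 2 * err / len(data)
        w -= lr * gw
        b -= lr * gb
    return w, b

rng = random.Random(1)
# Abundant synthetic data from a slightly inaccurate "analytical" model.
synthetic = [(x, 1.8 * x + 0.5) for x in (rng.uniform(-1, 1) for _ in range(200))]
# Scarce "experimental" data from the true device response.
experimental = [(x, 2.0 * x + 0.3) for x in (rng.uniform(-1, 1) for _ in range(10))]

pretrained = train((0.0, 0.0), synthetic)                          # pre-training
finetuned = train(pretrained, experimental, lr=0.02, epochs=100)   # fine-tuning
```

The point of the sketch: fine-tuning starts from the pre-trained weights rather than from scratch, so the few experimental samples only need to correct the analytical model's residual bias instead of learning the full mapping.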
A many-to-one mapping geometric constellation shaping scheme is proposed with a fixed modulation format, a fixed forward error correction (FEC) engine, and rate adaptation with an arbitrarily small step. An autoencoder is used to optimize the labelings and the constellation points' positions.
We quantify the impact of thermo-optic and free-carrier effects on time-delay reservoir computing using a silicon microring resonator. We identify pump power and frequency detuning ranges yielding a normalized mean square error (NMSE) below 0.05 on the NARMA-10 task, depending on the time constants of the two considered effects.
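For reference, the NMSE figure of merit quoted above is, in one common convention for reservoir-computing benchmarks, the mean squared prediction error normalized by the variance of the target series (definitions vary slightly across papers; this plain-Python sketch follows that convention):

```python
def nmse(target, prediction):
    """Normalized mean square error: MSE divided by the target variance.

    nmse == 0 for a perfect prediction; nmse == 1 for a predictor that
    always outputs the target mean.
    """
    n = len(target)
    mean_t = sum(target) / n
    mse = sum((t - p) ** 2 for t, p in zip(target, prediction)) / n
    var_t = sum((t - mean_t) ** 2 for t in target) / n
    return mse / var_t
```

Under this definition, an NMSE below 0.05 means the residual error power is under 5% of the target's variance.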
Advanced digital signal processing techniques, in combination with ultra-wideband balanced coherent detection, have enabled a new generation of ultra-high-speed fiber-optic communication systems by moving most of the processing functionalities into the digital domain. In this paper, we demonstrate how digital signal processing techniques, combined with ultra-wideband balanced coherent detection, can enable optical frequency comb noise characterization techniques with novel functionalities. We propose a measurement method based on subspace tracking, in combination with multi-heterodyne coherent detection, for the identification, separation, and measurement of independent phase noise sources. Our proposed measurement technique offers several benefits. First, it enables the separation of the total phase noise associated with a particular comb line, or lines, into multiple independent phase noise terms associated with different noise sources. Second, it facilitates the determination of how each independent phase noise term scales with comb-line number. Our measurement technique can be used to identify the most dominant source of phase noise, gain a better understanding of the physics behind the phase noise accumulation process, and confirm existing phase noise models as well as enable better ones. In general, our measurement technique provides new insight into the noise behavior of optical frequency combs.
The end-to-end optimization of links based on directly modulated lasers may require an analytically differentiable channel model. We overcome this problem by developing and comparing differentiable laser models based on machine learning techniques.