Abstract: In recent years, many attempts have been made to enhance Orthogonal Frequency Division Multiplexing with Index Modulation (OFDM-IM) in terms of spectral efficiency and error performance. Two challenges typically arise when using OFDM-IM. First, spectral efficiency degrades due to subcarrier deactivation, especially with higher-order ($M$-ary) modulation, where every inactive subcarrier costs a loss of $\log_2(M)$ bits. Second, using a fixed number of active subcarriers within a sub-block keeps errors localized within that sub-block, but it forgoes the advantage of exploiting all possible pattern combinations, degrading the overall spectral efficiency. In this paper, we introduce a solution to tackle these problems. Enhanced Generalized Index Modulation (EGIM) is a simple, systematic way to generate and detect the OFDM-IM frame. Unlike classical OFDM-IM generation, which splits the frame into sub-frames and thereby increases the complexity of both the transmitter and the receiver's maximum-likelihood detector, EGIM makes full use of all possible combinations of active subcarriers within the frame by varying the number of active subcarriers ($k$) according to the incoming data. EGIM remains susceptible to error propagation if the OFF symbol is wrongly mapped to one of the ON symbols or vice versa. For that reason, we propose an OFDM-IM autoencoder to overcome this problem. The encoder generates the (ON/OFF) symbols systematically to achieve the advantage of sending all possible frame index patterns depending on the input bit stream, offering an average gain of 3 dB in power efficiency. The performance of the proposed encoder was compared to that of a standard encoder with the same effective coding rate, using soft- and hard-decision Viterbi decoding to exploit the power gain achieved.
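As an illustration of the spectral-efficiency argument above, the sketch below (an illustrative assumption, not the paper's implementation; the sub-block sizes and the assumption that every on/off pattern of the frame is usable are mine) compares the index bits carried by classical OFDM-IM, which gets $\lfloor \log_2 \binom{n}{k} \rfloor$ bits per sub-block of size $n$ with a fixed $k$ active subcarriers, against a variable-$k$ scheme that can select any of the $2^N$ activation patterns of an $N$-subcarrier frame:

```python
from math import comb, floor, log2

def classical_index_bits(n: int, k: int) -> int:
    """Index bits per sub-block with a fixed number k of active subcarriers."""
    return floor(log2(comb(n, k)))

def variable_k_index_bits(n_frame: int) -> int:
    """Index bits when every on/off pattern of the frame is usable: log2(2^N) = N."""
    return n_frame

# Hypothetical example: a 16-subcarrier frame split into four sub-blocks of n=4, k=2
frame, n, k = 16, 4, 2
fixed_bits = (frame // n) * classical_index_bits(n, k)  # 4 * floor(log2(6)) = 8 bits
variable_bits = variable_k_index_bits(frame)            # 16 bits
```

In this toy setting the variable-$k$ scheme doubles the index bits per frame; in practice some patterns (e.g. the all-OFF frame) may be reserved, so the exact gain depends on the mapping used.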
Abstract: The SoccerNet 2023 challenges were the third annual video understanding challenges organized by the SoccerNet team. For this third edition, the challenges were composed of seven vision-based tasks split into three main themes. The first theme, broadcast video understanding, is composed of three high-level tasks related to describing events occurring in the video broadcasts: (1) action spotting, focusing on retrieving all timestamps related to global actions in soccer, (2) ball action spotting, focusing on retrieving all timestamps related to the soccer ball change of state, and (3) dense video captioning, focusing on describing the broadcast with natural language and anchored timestamps. The second theme, field understanding, relates to the single task of (4) camera calibration, focusing on retrieving the intrinsic and extrinsic camera parameters from images. The third and last theme, player understanding, is composed of three low-level tasks related to extracting information about the players: (5) re-identification, focusing on retrieving the same players across multiple views, (6) multiple object tracking, focusing on tracking players and the ball through unedited video streams, and (7) jersey number recognition, focusing on recognizing the jersey number of players from tracklets. Compared to the previous editions of the SoccerNet challenges, tasks (2-3-7) are novel, including new annotations and data, task (4) was enhanced with more data and annotations, and task (6) now focuses on end-to-end approaches. More information on the tasks, challenges, and leaderboards is available at https://www.soccer-net.org. Baselines and development kits can be found at https://github.com/SoccerNet.