In this article, we present a high-bandwidth egocentric neuromuscular speech interface that translates silently voiced speech articulations into text and audio. Specifically, we collect electromyogram (EMG) signals from multiple articulatory sites on the face and neck as individuals articulate speech in an alaryngeal manner, and perform EMG-to-text or EMG-to-audio translation. Such an interface is useful for restoring audible speech to individuals who have lost the ability to speak intelligibly due to laryngectomy, neuromuscular disease, stroke, or trauma-induced damage (e.g., radiotherapy toxicity) to the speech articulators. Previous work has focused on training text or speech synthesis models using EMG collected during audible speech articulation, or on transferring audio targets from EMG collected during audible articulation to EMG collected during silent articulation. However, such paradigms are not suited to individuals who have already lost the ability to articulate speech audibly. To our knowledge, we are the first to present, in an open-source manner, alignment-free EMG-to-text and EMG-to-audio conversion using only EMG collected during silently articulated speech. On a limited-vocabulary corpus, our approach achieves an almost 2.4x improvement in word error rate with a model that is 25x smaller by leveraging the inherent geometry of EMG.