Convolutional blocks have long been central to medical image segmentation, excelling at dense prediction tasks. However, their limited ability to capture long-range dependencies constrains their performance. Transformer-based architectures address this limitation through attention mechanisms that model global context and produce expressive feature representations, and recent work has explored this potential by combining transformer encoders with convolutional decoders in hybrid frameworks. Despite their advantages, these approaches suffer from limited inductive bias, high computational cost, and reduced robustness to data variability. To overcome these issues, this study introduces CATFA-Net, a novel and efficient segmentation framework designed to produce high-quality segmentation masks while lowering computational cost and increasing inference speed. CATFA-Net pairs a hierarchical hybrid encoder with a lightweight convolutional decoder backbone. Its transformer-based encoder uses a new Context Addition Attention mechanism that captures long-range dependencies without the quadratic complexity of standard attention. Features from the transformer branch are fused with those from the convolutional branch through a proposed Cross-Channel Attention mechanism, which preserves spatial and channel information during downsampling. Additionally, a Spatial Fusion Attention mechanism in the decoder refines features and suppresses background noise. Extensive evaluations on five publicly available datasets show that CATFA-Net outperforms existing methods in both accuracy and efficiency, setting new state-of-the-art Dice scores on GLaS (94.48%) and ISIC 2018 (91.55%). Robustness tests and external validation further demonstrate its strong generalization in binary segmentation tasks.
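The abstract does not specify how Context Addition Attention avoids quadratic cost, so the following is only an illustrative sketch of the general kernelized (linear) attention trick that sub-quadratic attention variants commonly rely on: reassociating (QKᵀ)V as Q(KᵀV) so the N×N attention matrix is never materialized, reducing cost from O(N²d) to O(Nd²). The function name and feature map are assumptions for illustration, not the paper's actual mechanism.

```python
import numpy as np

def linear_attention(q, k, v, eps=1e-6):
    """Kernelized attention sketch: O(N * d^2) instead of O(N^2 * d).

    q, k: (N, d) query/key token features; v: (N, d_v) values.
    NOTE: illustrative stand-in, not CATFA-Net's Context Addition Attention.
    """
    # Positive feature map phi(x) = elu(x) + 1, a common linear-attention choice.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    q, k = phi(q), phi(k)
    # Reassociate: (phi(Q) phi(K)^T) V  ->  phi(Q) (phi(K)^T V).
    kv = k.T @ v                                    # (d, d_v), independent of N^2
    z = q @ k.sum(axis=0, keepdims=True).T + eps    # per-row normalizer, (N, 1)
    return (q @ kv) / z
```

Because no softmax is applied, this reordering is exact: it yields the same result as forming the full kernelized attention matrix explicitly, just without the quadratic memory footprint.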