Recent rapid progress in text-based large language models (LLMs) has spurred interest in extending such models to multimodal perception and understanding tasks. Hearing is an essential capability that is highly desirable to integrate into LLMs. However, integrating listening capabilities into LLMs effectively is challenging, as it requires generalizing across complex auditory tasks spanning both speech and non-speech sounds. To address this challenge, we introduce Cryfish, our auditory-capable LLM. The model integrates WavLM audio-encoder features into the Qwen2 model through a transformer-based connector and is adapted to a variety of auditory tasks via a specialized training strategy. We evaluate Cryfish on the new Dynamic SUPERB Phase-2 benchmark, a comprehensive multitask suite designed specifically for auditory-capable models, and present an in-depth analysis and detailed comparison with publicly available models.
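For illustration only, the following is a minimal sketch of the kind of transformer-based connector the abstract refers to: a small module that contextualizes audio-encoder features and projects them into the embedding space of an LLM decoder. It is not the authors' implementation; the feature dimensions, number of connector layers, and frame-stacking downsampling are assumed values chosen for the example.

```python
# Minimal sketch (assumed, not the paper's implementation) of a
# transformer-based connector mapping WavLM-style audio features
# into the embedding space of a Qwen2-style decoder.
import torch
import torch.nn as nn

class AudioConnector(nn.Module):
    def __init__(self, audio_dim=1024, llm_dim=3584, num_layers=2,
                 num_heads=8, stride=4):
        super().__init__()
        # Temporal downsampling by stacking `stride` consecutive frames
        # (an assumed design choice, not taken from the paper).
        self.stride = stride
        layer = nn.TransformerEncoderLayer(
            d_model=audio_dim * stride, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.proj = nn.Linear(audio_dim * stride, llm_dim)

    def forward(self, feats):
        # feats: (batch, frames, audio_dim) from the audio encoder
        b, t, d = feats.shape
        t = t - t % self.stride                        # drop trailing frames
        x = feats[:, :t].reshape(b, t // self.stride, d * self.stride)
        x = self.encoder(x)                            # contextualize stacked frames
        return self.proj(x)                            # (batch, frames/stride, llm_dim)

# Usage: the resulting embeddings would be prepended to the text token
# embeddings before being passed to the LLM decoder.
feats = torch.randn(1, 200, 1024)        # ~4 s of 50 Hz encoder features (assumed)
audio_embeds = AudioConnector()(feats)   # -> (1, 50, 3584)
```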