Traditional video coding standards (e.g., VVC and HEVC) are optimized for human visual perception and therefore transmit substantial texture redundancy that hinders machine decision-making under constrained bandwidths. Over dynamic channels, this redundancy causes severe ``cliff effects'' and prohibitive latency. To address this, we propose a robust multimodal semantic communication framework built on an adaptive Object-Attribute-Relation (O-A-R) hierarchy. Bypassing pixel-level reconstruction entirely, our framework directly fuses visual, textual, and audio streams into a decision-oriented topological graph. A bandwidth-adaptive strategy dynamically allocates transmission resources by semantic priority, while a cross-modal mechanism exploits text and audio priors to compensate for severe visual degradation. Experimental results show that at extremely low bandwidths (1--3 kbps), our method achieves over 90\% bandwidth savings (an approximately 10-fold reduction) compared with state-of-the-art digital schemes while maintaining superior scene-graph accuracy. In deep-fading channels (SNR $\leq$ 4 dB), it eliminates the cliff effect and degrades gracefully by strictly preserving foundational object anchors even when traditional codecs suffer 100\% decoding failure. Together with an 89\% reduction in end-to-end latency, our framework fulfills the real-time operational requirements of embodied agents.
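As a minimal illustration of the semantic-priority allocation (a sketch under assumed notation; the weights $w_i$, per-layer caps $R_i^{\max}$, and budget $B$ are illustrative symbols, not the exact formulation used in this work), each O-A-R layer $i \in \{\mathrm{O}, \mathrm{A}, \mathrm{R}\}$ could receive a rate share proportional to its priority under the instantaneous channel budget $B$:
\begin{equation*}
R_i = \min\!\Bigl(R_i^{\max},\; \frac{w_i}{\sum_j w_j}\, B\Bigr), \qquad w_{\mathrm{O}} > w_{\mathrm{A}} > w_{\mathrm{R}},
\end{equation*}
so that as $B$ shrinks, relation and attribute detail is pruned first while object anchors are retained, consistent with the graceful-degradation behavior described above.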