Multi-hop question answering (QA) requires models to perform structured and interpretable reasoning across multiple documents, posing significant challenges in zero-shot settings where task-specific supervision is unavailable. Existing approaches predominantly fall into two categories: purely neural models, which lack transparency and struggle to generalize to unseen reasoning structures, and symbolic systems, which offer interpretability but rely on static, predefined rule templates that limit adaptability. While recent neuro-symbolic frameworks attempt to combine the strengths of both paradigms, they often operate with fixed symbolic programs that cannot evolve during inference. To address this limitation, we propose a context-mutated neuro-symbolic reasoning framework that dynamically adapts symbolic rule chains based on contextual feedback collected during inference. Our method incrementally mutates symbolic reasoning steps in response to inconsistencies between predicted and expected inference paths, enabling real-time adaptation to novel or shifted question structures. This feedback-driven design enhances robustness and generalization under distributional shifts while preserving symbolic interpretability. Our system consists of three core components: a neural retriever, a symbolic rule evolution engine, and a verifier module that grounds symbolic outputs into final answers. We focus on the modeling and algorithmic design of dynamic symbolic mutation, with empirical evaluation on the HotpotQA benchmark demonstrating the effectiveness of our approach.
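The feedback-driven mutation described above can be sketched as a simple loop: compare the rule chain's predicted inference path against the expected path, and rewrite the first diverging step. The sketch below is purely illustrative; the names (`RuleChain`, `mutate`, `adapt`) and the path representation are assumptions for exposition, not the framework's actual API.

```python
from dataclasses import dataclass

# Illustrative sketch of feedback-driven symbolic mutation.
# A reasoning step is modeled as a (predicate, argument) pair; the
# "expected path" stands in for contextual feedback gathered at inference.

@dataclass
class RuleChain:
    steps: list  # e.g. [("retrieve", "doc_A"), ("bridge", "entity_B")]

def consistent(predicted_path, expected_path):
    """Check whether the predicted inference path matches the feedback."""
    return predicted_path == expected_path

def mutate(chain, expected_path):
    """Replace the first step that diverges from the feedback path."""
    new_steps = list(chain.steps)
    for i, (pred, exp) in enumerate(zip(new_steps, expected_path)):
        if pred != exp:
            new_steps[i] = exp  # incremental mutation of one symbolic step
            break
    return RuleChain(new_steps)

def adapt(chain, expected_path, max_iters=5):
    """Iteratively mutate the rule chain until it is consistent."""
    for _ in range(max_iters):
        if consistent(chain.steps, expected_path):
            return chain
        chain = mutate(chain, expected_path)
    return chain
```

In practice, the feedback signal would come from the verifier module rather than a ground-truth path; here an explicit expected path keeps the mutation logic self-contained.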





