Researchers have published a new neural-symbolic architecture that extends Fuzzy Cognitive Maps to handle non-monotonic causal relationships — a class of real-world dynamics that standard models have long struggled to represent accurately.
Fuzzy Cognitive Maps (FCMs) are a well-established method for modelling complex systems by representing concepts as nodes and causal relationships as weighted edges in a graph. Their appeal lies in interpretability: unlike black-box neural networks, FCMs allow domain experts to read and reason about the model's structure. They have been applied across fields including healthcare, ecology, and engineering. However, a core limitation has persisted: standard FCMs use fixed scalar weights and monotonic activation functions, meaning they can only represent causal effects that consistently increase or decrease — they cannot capture relationships that reverse direction or plateau depending on conditions.
The Problem with Monotonic Causality
Many real-world phenomena are fundamentally non-monotonic. A textbook example is the Yerkes-Dodson law, which describes how performance improves with arousal up to a point, then declines — an inverted-U shape no standard FCM can faithfully represent. Similarly, systems governed by saturation effects or periodic dynamics fall outside what conventional FCM architectures can model. Previous attempts to address this have typically involved adding hidden layers or increasing graph complexity, both of which erode the interpretability that makes FCMs valuable in the first place.
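The limitation is easy to demonstrate. In a minimal sketch (hypothetical code, not from the paper), a standard FCM edge passes a scalar-weighted input through a monotonic squashing function, so its output can only rise or only fall as the cause increases; an inverted-U target reverses direction and is therefore out of reach for any choice of weight:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def is_monotonic(y):
    # True if the sequence never changes direction.
    d = np.diff(y)
    return bool(np.all(d >= 0) or np.all(d <= 0))

# Arousal levels on a grid.
arousal = np.linspace(-3.0, 3.0, 200)

# Standard FCM edge: fixed scalar weight + monotonic activation.
w = 1.5
fcm_effect = sigmoid(w * arousal)

# Yerkes-Dodson-style target: performance rises, then falls (inverted U).
performance = np.exp(-arousal**2)

print(is_monotonic(fcm_effect))   # True: monotonic for any scalar w
print(is_monotonic(performance))  # False: reverses direction at the peak
```

Flipping the sign of `w` only flips the direction of the curve; no scalar weight makes the edge's output rise and then fall.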
The new paper, posted to arXiv in April 2025, proposes the Kolmogorov-Arnold Fuzzy Cognitive Map (KA-FCM) as a direct solution. Rather than attaching a single scalar weight to each edge, the architecture places a learnable, univariate B-spline function on each causal connection. B-splines are smooth, flexible piecewise-polynomial curves that can approximate arbitrary shapes, meaning each edge can now encode a non-linear, non-monotonic relationship between two concepts. This fundamental modification shifts the non-linearity from the nodes' aggregation phase directly to the causal influence phase.
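The paper's exact spline parameterisation is not reproduced here, but a least-squares B-spline fit (using SciPy as a stand-in; the knot layout and fitting method below are assumptions) illustrates how a single edge can learn a non-monotonic causal curve from data:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Training data for one edge: cause value -> observed effect,
# following an inverted-U (non-monotonic) relationship.
x = np.linspace(0.0, 1.0, 100)
y = np.sin(np.pi * x)

# Cubic B-spline with a few interior knots; boundary knots are
# repeated k+1 times, as the B-spline basis requires.
k = 3
interior = np.linspace(0.0, 1.0, 8)[1:-1]
t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]

# The fitted spline plays the role of a learnable edge function.
edge_fn = make_lsq_spline(x, y, t, k=k)

# The fitted edge rises then falls, a shape a scalar weight cannot encode.
print(float(edge_fn(0.5)) > float(edge_fn(0.1)))  # True
print(float(edge_fn(0.5)) > float(edge_fn(0.9)))  # True
```

In the actual architecture the spline coefficients would be trained jointly with the rest of the map rather than fit edge-by-edge, but the representational point is the same: one edge, one flexible univariate function.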
What Kolmogorov-Arnold Brings to the Architecture
The design draws on the Kolmogorov-Arnold representation theorem, a mathematical result stating that any continuous multivariate function can be decomposed into sums and compositions of continuous univariate functions. This theorem has recently attracted attention in machine learning as the theoretical basis for Kolmogorov-Arnold Networks (KANs), which move learnable functions from nodes to edges. The KA-FCM applies this same principle within the FCM framework, preserving the graph structure and interpretability of traditional FCMs while expanding their expressive power.
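Formally, the theorem states that any continuous function on the unit cube can be written as

```latex
f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right),
```

where every $\Phi_q$ and $\varphi_{q,p}$ is a continuous function of a single variable. The multivariate behaviour lives entirely in how the univariate pieces are summed and composed, which is what licenses putting learnable univariate functions on the edges of a graph.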
Critically, the researchers emphasise that this modification does not increase graph density or introduce hidden layers. The model remains a single-layer graph — every relationship is still directly readable as an edge — but each edge now carries a function rather than a number. According to the authors, this means the learned causal laws can be explicitly extracted and examined, rather than being buried inside opaque weight matrices.
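As a rough sketch of what this looks like computationally (the update rule below follows standard FCM dynamics, and the piecewise-linear `np.interp` edge functions are hypothetical stand-ins for the paper's learned B-splines), the only change from a classic FCM update is that each scalar weight w_ij becomes a univariate function phi_ij:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Each edge is a univariate function tabulated at fixed knots;
# in KA-FCM these would be learned B-splines.
knots = np.linspace(-1.0, 1.0, 5)

def make_edge(values):
    return lambda a: np.interp(a, knots, values)

# phi[i][j]: influence of concept i on concept j (None = no edge).
phi = [
    [None, make_edge([0.0, 0.5, 1.0, 0.5, 0.0]), None],   # inverted-U edge
    [None, None, make_edge([-1.0, -0.5, 0.0, 0.5, 1.0])], # monotonic edge
    [None, None, None],
]

def step(a):
    """One synchronous update: a_j(t+1) = sigmoid(sum_i phi_ij(a_i(t)))."""
    n = len(a)
    nxt = np.zeros(n)
    for j in range(n):
        total = sum(phi[i][j](a[i]) for i in range(n) if phi[i][j] is not None)
        nxt[j] = sigmoid(total)
    return nxt

state = np.array([0.2, 0.5, 0.1])
print(step(state))
```

Because each `phi[i][j]` is an explicit curve over the cause's activation range, it can be plotted directly, which is the sense in which the learned causal laws remain readable rather than buried in a weight matrix.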
Tested Against Baselines Across Three Domains
The team validated KA-FCM against two baselines: a standard FCM trained with Particle Swarm Optimization (PSO), a common learning algorithm for these models, and a Multi-Layer Perceptron (MLP), representing a general-purpose neural network. Three test scenarios were chosen to probe different capabilities.
The first was the Yerkes-Dodson non-monotonic inference task — directly targeting the core limitation of standard FCMs. The second was symbolic regression, testing whether the model could recover explicit mathematical expressions from data. The third was chaotic time-series forecasting, a demanding benchmark involving systems highly sensitive to initial conditions.
According to the paper, KA-FCMs outperformed conventional FCM architectures across all three domains. Performance relative to MLPs was described as competitive — meaning the interpretable model approached the accuracy of a black-box approximator without matching it in every case. These benchmarks are self-reported by the authors and have not yet undergone independent peer review.
Interpretability as a First-Class Feature
One of the study's notable claims is that KA-FCMs do not merely perform better — they also remain interpretable in a meaningful sense. Because the B-spline functions on edges can be plotted and analysed, a practitioner can inspect the exact shape of a causal relationship the model has learned. In the symbolic regression experiments, the authors report being able to extract recognisable mathematical laws directly from the trained edges.
This positions KA-FCM within a growing research trend: architectures that attempt to combine the accuracy of deep learning with the transparency demanded in regulated or high-stakes domains such as medicine, climate modelling, and policy analysis. Standard neural networks remain more powerful general approximators, but their internal representations are difficult to audit or explain to non-specialists.
The FCM community has historically prioritised explainability, and KA-FCM's design preserves that priority while closing part of the accuracy gap with black-box methods. Whether practitioners working with existing FCM pipelines will adopt the added complexity of spline-based edges remains an open question.
What This Means
For researchers and engineers using Fuzzy Cognitive Maps in domains where causal relationships don't follow simple linear or monotonic patterns, KA-FCM offers a technically grounded path to more accurate models, without trading away the interpretability that drew them to FCMs in the first place.