From Neuromorphic AI to Neurosymbolic AI Investment

02/10/2026

Neurosymbolic AI - key to AI + Industrial

Neurosymbolic models, which integrate structured reasoning, rules and logical constraints into learning systems, are emerging as a critical path forward if AI is to achieve its potential in industrial applications. 

AI-driven capital investment

Artificial intelligence has driven unprecedented market expansion in recent years. According to metrics reported by the IMF, the combined market capitalisation of the ten largest U.S. companies now dominates the global equity market, driven by AI expectations rather than underlying earnings. Goldman Sachs also reported that since 2023, AI hardware spending alone has risen by approximately $300bn, fuelling a surge in data-centre investment and computing infrastructure.

Much of the AI-infrastructure build-out has been debt-financed. In practice, this creates a circular revenue dynamic in which capital is reinvested into compute faster than monetisation can mature. In the U.S. base case, roughly $4 trillion in data-centre spending would require $8 trillion in revenue to justify returns by 2030. In a more aggressive scenario, $10 trillion of capex would need to generate $20 trillion in downstream value in the same period.
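The revenue requirement implied by both scenarios is a simple 2× multiple of capex, which can be sketched as follows (illustrative arithmetic only; the multiple is inferred from the figures above, not an official forecast model):

```python
# Illustrative only: the 2x revenue-to-capex multiple is inferred from the
# scenarios cited above ($4tn -> $8tn; $10tn -> $20tn), not an official model.
def required_revenue(capex_tn: float, multiple: float = 2.0) -> float:
    """Revenue (in $tn) needed by 2030 to justify a given capex outlay."""
    return capex_tn * multiple

base_case = required_revenue(4)    # $4tn capex  -> $8tn revenue
aggressive = required_revenue(10)  # $10tn capex -> $20tn revenue
print(base_case, aggressive)
```

The point of the multiple is not precision but scale: each incremental trillion of debt-financed compute must be matched by twice that in downstream revenue within the decade.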

Limitations of neuromorphic AI

Investors may never realise value on these investments, however, because the limits of AI in industrial applications remain unresolved. Current neuromorphic models cannot reason independently, enforce constraints, or scale reliably in real-world systems. The transition toward neurosymbolic models, which integrate structured reasoning, rules and logical constraints into learning systems, is therefore emerging as a critical path forward. For investors, the key signal is evidence of a genuine shift from neuromorphic to neurosymbolic AI architectures, validating sustained progress in industrial AI.

Expansion and limitations of perception-based AI

The current AI expansion has been driven by steady advances in perception-based systems since the late 1990s, particularly in text, image, and speech recognition. In benchmark tests by Kiela et al. (2023), large language and vision models now approach human-level performance on narrow tasks. This scale-driven progress has generated renewed confidence.

Recent developments, however, have exposed the limits of this trajectory. One prominent example of overpromising was OpenAI’s GPT-5 launch, after which the company walked back its claim of PhD-level reasoning within days. This incident, one of many, has reinforced market scepticism toward capability over-extrapolation and toward the assumption that larger models alone can sustain exponential performance gains.

Moreover, training frontier-scale models now requires extreme computing power, while training data itself remains finite. This marks a transition point: from AI driven by scale to AI constrained by architecture, economics, and real-world deployment.

Humanoid robotics is much harder than autonomous driving

Humanoid robotics, which has seen booming investor interest in the past year, is an area where the fragile economics of AI will be particularly apparent.

Humanoid robots share many components with autonomous vehicles, such as GPUs, batteries, sensors, and vision systems. But industry leaders such as Nvidia acknowledge that, unlike vehicles, humanoids must interact continuously with people and unstructured environments, perform undefined tasks, operate with limited battery and compute capacity, and manage balance and fall risks.

Humanoid robots are often priced below $20k, compared with autonomous vehicles costing $40k or more, compressing already fragile unit economics. For general investors and CVCs alike, this creates a mismatch between near-term expectations and long-cycle realities.

The economic consequences are already visible. Approximately 75% of recent AI infrastructure expansion has been financed through borrowing, reinforcing a feedback loop in which AI companies sell compute to one another to justify continued expansion. While this model may be sustainable for software platforms, it is far harder to sustain for physical systems that require manufacturing, deployment, servicing, and long-term reliability.

As a result, humanoid robotics is likely at the peak of inflated expectations on the Gartner Hype Cycle, which is typically followed by a trough of disillusionment, as autonomous vehicles have already experienced.

Three signals that investors should pay attention to

Recent progress in autonomous systems and task-specific deployment of humanoids has produced measurable near-term value, particularly in AI-augmented industrial workflows. Yet this progress remains constrained by the need for purpose-built architecture, sound economics, and domain-specific processes.

Three signals investors should pay attention to:

1. Scale alone will not compound AI ability indefinitely. Training more complex LLMs requires ever more computational resources, while the marginal gains from additional scale diminish. Larger models amplify cost and complexity without solving core limitations.

2. Neuromorphic AI remains a black box. Neuromorphic systems excel at pattern recognition across text, images, and speech. However, they remain black-box systems, limited to the data distributions they learn from. As tasks grow more complex, corner cases multiply, and outputs reflect correlation rather than explicit logic.

3. Embodied AI clearly exposes limitations. Autonomous vehicles and humanoid robots are embodied forms of neuromorphic AI. Despite decades of progress dating back to the 2004 DARPA Grand Challenge, development has been slower than anticipated. After nearly 20 years, there are still no fully autonomous Level-4 personal vehicles. Waymo’s growth highlights both progress and constraint: its fleet still relies on remote human operators, currently roughly one operator per three vehicles, with forecasts of one per ten vehicles by 2030. This underscores the persistent gap between perception and true autonomy.

Neurosymbolic AI is key to AI + Industrial

The path forward increasingly depends on advances in neurosymbolic AI: systems that combine neural learning with structured reasoning, rules, and logical constraints.

Neurosymbolic architectures combine:

1. Neural models for pattern recognition.

2. LLM-based language understanding, entity extraction, and causal chain prediction.

3. Symbolic ontologies, rules, and knowledge graphs for structured and constraint-based reasoning.

This approach offers meaningful advantages: reduced hallucination, stronger generalisation, improved explainability, and significantly lower data requirements. Crucially, it enables systems to understand “why”, not just “what.”
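The division of labour between the neural and symbolic stages can be illustrated with a minimal sketch (hypothetical labels, thresholds, and rules chosen for illustration; not any specific vendor's architecture): a learned model proposes candidate classifications, and an explicit rule layer enforces domain constraints before a result is accepted.

```python
# Minimal neurosymbolic sketch: a neural stage proposes candidates,
# a symbolic stage enforces hard constraints. All names, labels, and
# thresholds here are illustrative assumptions.

def neural_propose(reading: float) -> list[tuple[str, float]]:
    # Stand-in for a learned model: returns (label, confidence) pairs.
    if reading > 0.7:
        return [("overheat", 0.6), ("normal", 0.4)]
    return [("normal", 0.9), ("overheat", 0.1)]

# Symbolic layer: explicit domain rules act as hard constraints, so the
# system can explain *why* a candidate was accepted or rejected.
RULES = {
    "overheat": lambda r: r > 0.5,   # overheat only plausible above threshold
    "normal":   lambda r: r <= 0.9,  # readings near the ceiling are never "normal"
}

def neurosymbolic_classify(reading: float) -> str:
    candidates = neural_propose(reading)
    # Keep only candidates consistent with the rules, then pick the
    # highest-confidence survivor.
    valid = [(label, conf) for label, conf in candidates if RULES[label](reading)]
    label, _ = max(valid, key=lambda pair: pair[1])
    return label

print(neurosymbolic_classify(0.8))  # neural favours "overheat"; rules concur
```

The design point is that the rules are inspectable and auditable: a rejected candidate can be traced to the specific constraint it violated, which is the source of the explainability and reduced-hallucination advantages described above.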

Application case of Neurosymbolic AI

Early industrial applications illustrate this shift to neurosymbolic AI, demonstrating a different AI trajectory where AI is integrated into existing industrial workflows, rather than relying on speculative bets. Citrine Informatics, founded in 2013, applies materials-aware AI to materials discovery and optimisation. By embedding domain knowledge into its models, Citrine enables 5–10× faster product development, 80% reductions in customer response time, and dynamic portfolio rationalisation, all while requiring far less compute than frontier LLMs.

Innoaero, another example, has used AI-enabled R&D to discover a new copper alloy, achieving 10× efficiency gains over traditional methods and reaching commercial sales within one year of seed financing.

These businesses demonstrate applied AI that bridges the relevant length scales of materials-science modelling. Their applications are delivering real-world impact and fundamentally changing materials discovery, despite using far less compute than frontier LLMs. To capture durable returns as the sector matures, investors must recalibrate their expectations toward integration, capital efficiency, and reasoning-enabled systems.