A new artificial intelligence laboratory spun out of Harvard University is in talks to raise approximately $100 million from investors to develop AI-powered memory technology, Bloomberg Technology reported, citing people familiar with the matter.

The lab is led by Gabriel Kreiman, a Harvard neuroscientist whose research sits at the intersection of brain science and machine learning. The venture's stated ambition — building "a world where humans can remember everything" — signals a push toward persistent, long-term memory architectures that neither current AI models nor human cognition reliably provide.

Why AI Memory Is a Multi-Billion Dollar Problem

Memory remains one of the most significant unsolved challenges in AI development. Today's large language models process information within a limited context window and retain nothing between sessions unless explicitly engineered to do so. Humans, meanwhile, forget the vast majority of what they experience. A technology that bridges both gaps would have applications ranging from personal productivity tools and medical diagnostics to enterprise knowledge management.
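The "explicitly engineered" part is the key caveat: a stateless model only knows what is re-sent inside each prompt. A minimal sketch of the kind of external memory layer applications bolt on today (all names here are hypothetical, for illustration only):

```python
class SessionMemory:
    """Persists facts across sessions so they can be re-injected into prompts.

    Illustrative sketch only; not the lab's technology or any vendor's API.
    """

    def __init__(self):
        self.facts: dict[str, list[str]] = {}  # user_id -> remembered facts

    def remember(self, user_id: str, fact: str) -> None:
        self.facts.setdefault(user_id, []).append(fact)

    def build_prompt(self, user_id: str, question: str) -> str:
        # A stateless model only "sees" what fits in this string: its
        # context window. Anything not re-sent here is effectively forgotten.
        memory = "\n".join(self.facts.get(user_id, []))
        return f"Known facts about the user:\n{memory}\n\nQuestion: {question}"


memory = SessionMemory()
memory.remember("u1", "Prefers metric units.")
prompt = memory.build_prompt("u1", "How tall is Everest?")
```

The pattern illustrates the gap the article describes: memory lives outside the model, and the context window caps how much of it can be carried into any single exchange.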

"A world where humans can remember everything" — the lab's stated mission points to a market that touches almost every vertical in AI.

The timing of the fundraise reflects sustained investor appetite for foundational AI infrastructure in 2026, with venture capital continuing to flow toward labs that combine academic credibility with commercial ambition. A $100 million raise at seed or Series A stage would position Kreiman's lab among the better-capitalized AI research spinouts in the current market.

What Kreiman's Neuroscience Background Brings

Kreiman's academic work has focused on visual neuroscience and the computational mechanisms underlying memory and perception in the human brain. That background matters here. Most AI memory research approaches the problem from a pure engineering angle — larger context windows, retrieval-augmented generation, external database integration. A neuroscience-led approach could pursue architectures that more closely mirror biological memory consolidation, potentially yielding systems that are more efficient, more robust, or more naturally integrated with human users.
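For contrast with a biologically inspired approach, the retrieval-augmented pattern mentioned above can be sketched in a few lines. Production systems score documents with learned vector embeddings; plain word overlap stands in here purely for illustration:

```python
# Hedged sketch of retrieval-augmented generation (RAG): candidate documents
# are scored against a query and the best match is prepended to the prompt.
# Word-overlap scoring is a toy stand-in for real embedding similarity.

def score(query: str, doc: str) -> int:
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str]) -> str:
    return max(corpus, key=lambda doc: score(query, doc))


corpus = [
    "Memory consolidation moves information into long-term storage.",
    "Context windows cap how much text a model sees at once.",
]
best = retrieve("What limits a model's context window?", corpus)
augmented_prompt = f"Context: {best}\n\nAnswer the question."
```

The limitation is visible even in the toy version: retrieval fetches stored text back into the prompt, but nothing about the model itself changes. A consolidation-style architecture would instead aim to integrate experience into the system over time, which is the distinction the neuroscience-led framing points at.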

Harvard spinouts carry institutional credibility that can accelerate both fundraising and talent acquisition. The university has produced a string of high-profile AI ventures, and Kreiman's profile within the neuroscience community gives the lab access to a research network that pure-play tech startups typically lack.

The Competitive Landscape for Memory AI

Kreiman's lab enters a field that larger players are already investing in heavily. OpenAI has introduced persistent memory features in ChatGPT. Google DeepMind and Meta AI are both pursuing long-context and memory-augmented model architectures. Several well-funded startups, including Mem and various retrieval-augmented generation platforms, are also competing in adjacent spaces.

The differentiation for Kreiman's venture likely lies in the depth of its neuroscientific approach and its ambition to address memory at a more fundamental level than feature-layer additions to existing models. Whether that translates into a defensible technical advantage will depend heavily on what the $100 million is used to build and how quickly the team can demonstrate results.

Details on the lab's current valuation, lead investors, or timeline to close the round were not disclosed in Bloomberg's reporting. The fundraising talks are ongoing, according to people familiar with the matter, meaning terms and total capital raised could change.

What This Means

If Kreiman's lab closes the $100 million raise and delivers on its technical premise, it could establish a new category of AI infrastructure — one where persistent, human-scale memory becomes a standard capability rather than a bolted-on feature, reshaping how AI systems are designed and deployed across industries.