“Global Capital & Technology Companies Observer”
Global Capital Market Observer

Memory is a key component of agentic workflows, closely related to knowledge and analysis. It deserves our attention because it differs in granularity and function from “knowledge” and “analysis”. Analysis defines how an agent interprets who it is (its role, or “persona”), what it is doing (its behavior model), and where it operates (its environment), while knowledge provides the facts and representations that guide decisions. Memory connects these elements and actively participates in decision-making as a dynamic record of experience. We have studied memory for decades, yet we still do not fully understand how to enable LLMs to maintain memory over time. Current AI systems can retrieve information, summarize past interactions, and even selectively store details, but they lack a stable, structured memory that reliably persists. Today we will explore a half-forgotten paper that may offer insights from the past; explain the different types of memory and their roles in agentic workflows; see how these components combine in practice; clarify how models with memory features “remember” things; and ask ourselves: how is GenAI changing the nature of memory?
The following discussion includes:
- SOAR’s Legacy in Agentic Memory Systems: A Bridge from Cognitive Models to AI Agents
- Declarative Knowledge and Procedural Knowledge
- Chunking
- Subgoals and Hierarchical Problem Solving
- SOAR’s Legacy and Its Resonance with Modern AI
- The Types of Memory We Use Today
- Long-term Memory
- Short-term Memory
- Integrating Everything
- Memory and Generative Agents
- How Does ChatGPT “Remember” Things? Understanding Memory Mode
- Concluding Thoughts: The Impact of AI on Human Memory
SOAR’s Legacy in Agentic Memory Systems: A Bridge from Cognitive Models to AI Agents
In 1987, Allen Newell, Paul Rosenbloom, and John Laird proposed SOAR — a general intelligence architecture. Today, we continue to debate the definition of general intelligence, but the authors of SOAR have a clear perspective: general intelligence refers to the ability of a system to handle various cognitive tasks, employ various problem-solving methods, and continuously learn from experience.
When the SOAR architecture was introduced, it was a bold attempt to create a unified cognitive theory that integrated problem-solving, learning, and memory into a single framework. SOAR’s solutions are elegant and introduce a structured approach to memory, resonating with modern AI agent architectures. By distinguishing between working memory (for immediate cognitive tasks) and long-term procedural memory (for learning rules), SOAR anticipated the challenges of building systems that can retain, recall, and refine knowledge over the long term. While modern agentic AI relies more on statistical learning and vector-based retrieval rather than explicit production rules, the fundamental question of how systems remember and improve remains central — this makes SOAR a relevant conceptual pioneer for today’s AI frameworks.
Declarative Knowledge and Procedural Knowledge
One of SOAR’s major innovations is the distinction between two types of knowledge. Declarative knowledge consists of facts and information stored in working memory, representing the system’s current understanding of the environment. In contrast, procedural knowledge exists in long-term memory in the form of production rules, determining the system’s actions. This explicit separation allows SOAR to manage immediate problem-solving tasks while building a persistent strategy library for future use.
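The split described above can be made concrete with a minimal sketch, assuming a toy production system in Python (this is an illustration of the idea, not SOAR itself): working memory holds declarative facts, and procedural knowledge is a set of condition-action rules that fire when their conditions match those facts.

```python
# Declarative knowledge: the system's current facts, held in working memory
working_memory = {("door", "locked"), ("agent", "has_key")}

# Procedural knowledge: production rules as (condition set, action) pairs
productions = [
    ({("door", "locked"), ("agent", "has_key")}, "unlock_door"),
    ({("door", "unlocked")}, "open_door"),
]

def match(memory, rules):
    """Return actions whose conditions are all present in working memory."""
    return [action for conditions, action in rules if conditions <= memory]

print(match(working_memory, productions))  # ['unlock_door']
```

The separation is the design point: facts change from moment to moment, while the rule library persists and grows.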
Chunking
When a system successfully solves a problem, it integrates the experience into new production rules. This “chunking“ process effectively compresses complex problem-solving processes into reusable knowledge, thereby reducing future computational load and improving efficiency. By internalizing successful strategies, SOAR continuously refines its problem-solving capabilities, much like humans learn from repetitive experiences.
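A hedged sketch of chunking, stripped to its essence: after solving a problem by (expensive) search, compress the result into a new rule so future encounters skip deliberation. The `solve_by_search` stand-in and the dictionary cache are illustrative simplifications, not SOAR’s actual mechanism.

```python
learned_chunks = {}  # state -> action, built up from experience

def solve_by_search(state):
    """Placeholder for an expensive problem-solving episode."""
    return "action_for_" + state

def solve(state):
    if state in learned_chunks:          # a chunk exists: no search needed
        return learned_chunks[state]
    action = solve_by_search(state)      # expensive deliberation
    learned_chunks[state] = action       # chunking: store as a new rule
    return action

solve("blocked_door")         # first call: searches, then chunks
print(solve("blocked_door"))  # second call: retrieved directly
```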
Subgoals and Hierarchical Problem Solving
Another profound aspect of SOAR is its use of automatic subgoals. When SOAR encounters a deadlock (i.e., a situation where existing knowledge is insufficient), it generates a new subgoal to overcome the obstacle. This mechanism of breaking complex problems into simpler, more manageable parts is akin to hierarchical problem-solving methods in human cognition. The concept of subgoals in SOAR influenced later developments, particularly in areas like hierarchical reinforcement learning and multi-agent coordination frameworks.
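The impasse-then-subgoal loop can be sketched as a recursion, assuming an invented goal vocabulary and decomposition table (the names below are made up for illustration): when no known rule covers a goal, the system decomposes it and recurses on the parts.

```python
# Direct procedural knowledge: goals the system already knows how to achieve
known_rules = {"boil_water": "use_kettle", "add_tea": "use_teabag"}

# How to break an unknown goal into simpler subgoals
decompositions = {"make_tea": ["boil_water", "add_tea"]}

def achieve(goal):
    if goal in known_rules:                  # existing knowledge applies
        return [known_rules[goal]]
    plan = []                                # impasse: no rule for this goal
    for subgoal in decompositions.get(goal, []):
        plan += achieve(subgoal)             # recurse on simpler subgoals
    return plan

print(achieve("make_tea"))  # ['use_kettle', 'use_teabag']
```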
SOAR’s Legacy and Its Resonance with Modern AI
SOAR’s structured approach marks a departure from earlier fragmented cognitive models. By integrating working memory, long-term memory, and learning, it has become a cornerstone of cognitive architecture and influenced approaches to general intelligence. Today, the challenges faced by AI systems driven by deep learning, LLMs and reinforcement learning parallel the issues originally posed by SOAR regarding memory, learning, and problem decomposition. Certain modern AI techniques resonate with SOAR’s subgoal mechanism, especially in hierarchical planning and task decomposition. Similarly, methods like fine-tuning, continual learning, and retrieval-augmented generation align with SOAR’s goals of leveraging past experiences to enhance performance, although their mechanisms differ. SOAR’s structured handling of declarative and procedural knowledge foreshadows the development of modern neuro-symbolic AI, which seeks to combine symbolic reasoning with neural adaptability. This synthesis emphasizes the enduring relevance of structured memory and dynamic learning in the pursuit of general intelligence.
Although deep learning once overshadowed SOAR’s structured approach, AI practitioners are now revisiting many of its core ideas. As AI agents struggle with memory, retrieval, and adaptability, SOAR seems less like a relic and more like a precursor to the next wave of autonomous AI advancements.
Another significant work influenced by Allen Newell is the ACT-R architecture, described in “An Integrated Theory of the Mind” by John R. Anderson, Daniel Bothell, and colleagues.
The Types of Memory We Use Today
Today, AI agents do not process memory as a singular process — it is a structured system composed of different layers, each with unique roles. Some memories persist over the long term, influencing long-term behavior, while others are fleeting, used only for immediate tasks at hand.
Long-term Memory: The Foundation of Persistent Knowledge
At the core of long-term memory are two different types: one is explicit (declarative) memory, involving structured, retrievable knowledge; the other is implicit (non-declarative) memory, which allows learning from past experiences.
Explicit memory enables AI to recall facts, rules, and structured knowledge. In this category, semantic memory is responsible for storing general truths and common knowledge. This is why AI systems can confidently state “The Eiffel Tower is in Paris” or “Dogs are mammals”. This type of memory lays the foundation for knowledge-based AI applications like search engines and chatbots.
Then there is episodic memory, which is more personalized — it captures specific events and experiences, allowing AI to remember the context of past interactions. If AI customer service remembers that a user previously requested a refund, it can adjust its response accordingly, making the interaction feel more intuitive and human-like.
In Memento (the movie), Leonard Shelby struggles with anterograde amnesia. He remembers his life before the injury (roughly, his semantic memory is intact) but cannot store new episodic memories, so every new interaction or event fades within minutes. His reliance on notes, Polaroid photos, and tattoos is an externalized memory system, an attempt to compensate for his inability to encode new personal experiences. Even with memory features, LLMs cannot store true episodic memories; they can only retrieve patterns and summarize past interactions.

Implicit memory, on the other hand, allows AI to form instincts, driven by procedural memory that helps agents learn skills without explicit recall. Think of a self-driving car that, after thousands of miles of training, improves its lane-keeping: the car does not explicitly “remember” every scene, it has an intuitive sense of how to drive. Experienced first-hand, this feels quite remarkable.
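The semantic/episodic split in explicit memory can be made concrete with a minimal sketch. The `AgentMemory` class and its field names are invented for illustration; real agent frameworks structure this differently.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    semantic: dict = field(default_factory=dict)   # general facts: key -> value
    episodic: list = field(default_factory=list)   # (timestamp, event) log

    def learn_fact(self, key, value):
        self.semantic[key] = value                 # timeless knowledge

    def log_event(self, timestamp, event):
        self.episodic.append((timestamp, event))   # a specific experience

mem = AgentMemory()
mem.learn_fact("eiffel_tower.location", "Paris")        # semantic
mem.log_event("2024-05-01", "user requested a refund")  # episodic

print(mem.semantic["eiffel_tower.location"])  # Paris
```

The design point is that the two stores answer different questions: semantic memory answers “what is true?”, episodic memory answers “what happened, and when?”.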
Short-term Memory: The Power of the Present
Long-term memory promotes growth and adaptation, while short-term memory keeps agents responsive during real-time interaction. The context window defines how much past input an AI model can retain in a single exchange. This limitation is crucial in LLMs: give an AI a small context window and it will forget what you just said; expand the window and it can maintain continuity over longer conversations, making responses more coherent and natural. Much current research focuses on expanding and optimizing context windows.

There is also working memory, which plays a crucial role in multi-step reasoning and decision-making. Just as humans use working memory to hold multiple ideas in mind while solving a math problem, AI agents rely on it to juggle multiple inputs at once. This is particularly important for complex tasks like planning, where an agent must weigh different pieces of information before deciding.
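Why a small context window causes forgetting can be shown with a toy sketch. It counts words rather than tokens, a simplifying assumption; real systems use tokenizers and larger budgets, but the failure mode is the same: whatever gets trimmed is simply never seen by the model.

```python
def fit_to_window(turns, max_words):
    """Keep the most recent turns that fit within the word budget."""
    kept, used = [], 0
    for turn in reversed(turns):      # walk from newest to oldest
        words = len(turn.split())
        if used + words > max_words:  # budget exceeded: drop older turns
            break
        kept.append(turn)
        used += words
    return list(reversed(kept))       # restore chronological order

history = ["my name is Ada", "I like tea", "what is my name?"]
print(fit_to_window(history, max_words=8))
# ['I like tea', 'what is my name?']  -- the turn with the name is gone
```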
Integrating Everything
The interactions between these different types of memory make modern AI agents increasingly effective. Long-term memory allows them to learn from past experiences, short-term memory ensures they focus on the present, and working memory enables them to process multiple inputs simultaneously. These components collectively shape the agents’ ability to act autonomously, adapt intelligently, and provide more meaningful interactions over time. As AI systems continue to evolve, refining their memory management will be key to unlocking more advanced agentic workflows — making them feel more natural, capable, and ultimately more intelligent.
Memory and Generative Agents
One of the most striking examples of memory-driven agentic behavior appears in Generative Agents: Interactive Simulacra of Human Behavior, a paper by researchers from Stanford University and Google Research. In this work, memory plays a critical role in enabling agents to simulate plausible human behavior. The proposed architecture treats memory as a dynamic component, allowing agents to observe, store, retrieve, and synthesize experiences over time to guide their interactions and decisions.
The memory system here is constructed in the form of a memory stream, allowing agents to continually log their experiences in natural language. These memories are not static; they are regularly retrieved and synthesized into higher-level thoughts, enabling agents to draw broader conclusions about themselves, others, and their environment.
Memory retrieval is weighted by three key factors:
- Recency (recent memories are more readily available)
- Importance (highly significant events are prioritized)
- Relevance (only contextually relevant information surfaces in decision-making)
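The Generative Agents paper combines these three factors into a single weighted retrieval score, with recency modeled as an exponential decay over time. A simplified sketch follows; the decay constant, equal weights, and the assumption that each factor is pre-normalized to [0, 1] are illustrative choices, not the paper’s exact settings.

```python
def retrieval_score(age_hours, importance, relevance,
                    decay=0.995, w=(1.0, 1.0, 1.0)):
    """Weighted sum of recency, importance, and relevance."""
    recency = decay ** age_hours   # exponentially fades with age
    return w[0] * recency + w[1] * importance + w[2] * relevance

# A fresh, relevant memory outranks a month-old, important-but-off-topic one
fresh = retrieval_score(age_hours=1,   importance=0.3, relevance=0.9)
old   = retrieval_score(age_hours=720, importance=0.9, relevance=0.2)
print(fresh > old)  # True
```

In practice the top-k memories by this score are injected into the agent’s prompt, which is what makes the behavior feel continuous.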
Reflection lets agents generalize from their experiences, forming insights that influence future behavior. For example, an agent that repeatedly creates musical works might form a self-image as a passionate musician. This enhances long-term consistency, keeping agents’ behavior aligned with their past interactions and evolving relationships.

Planning integrates memory further by letting agents project future actions from prior experience. Agents generate daily agendas and refine them into detailed action sequences, recursively adjusting them as new observations arrive.
Research shows that this memory-driven approach generates new social behaviors, such as information diffusion, relationship formation, and coordinated activities. However, this approach still has limitations, including potential memory retrieval failures, hallucinations, and biases inherited from the underlying language models. Ultimately, memory will become the foundation of agents’ credibility, enabling nuanced, dynamic interactions that transcend single-instance language model outputs, allowing generative agents to simulate social behaviors in interactive applications.
When it comes to memory in agentic workflows, we cannot help but mention modern everyday experiences — more specifically, if you do not explicitly tell ChatGPT, how does it remember whether your child is six years old? Even if you explicitly told it, how does it remember, and where is this information stored?
How Does ChatGPT “Remember” Things? Understanding Memory Mode
Most AI chat models have memory no better than Leonard Shelby’s in Memento: once the conversation ends, everything resets. That is fine for quick Q&A, but frustrating when you want continuity, or when you want the model to remember your writing style. Memory mode changes this, allowing ChatGPT to retain key details across conversations, so interactions feel more like talking to an assistant who actually knows you. This is both a little eerie and makes you feel heard. It also shortens the time you spend re-explaining yourself to the assistant.
How does it work?
After enabling memory, ChatGPT does not store complete conversation logs but extracts key facts and patterns. For example, if you frequently mention that you are writing a book about citizen diplomacy — rather than remembering every instance, it stores “The user is interested in citizen diplomacy and is writing a book”. Thus, when mentioned again, the model will not start from scratch. It also has selectivity — it will not remember everything you say, only what is repeated or explicitly confirmed (e.g., “remember I am doing news summaries”). This keeps memory concise and relevant.
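The selectivity described above can be sketched as a simple promotion rule: a fact enters persistent memory only when it is repeated or explicitly confirmed. The threshold and the facts below are invented for illustration; this is a guess at the behavior, not OpenAI’s implementation.

```python
from collections import Counter

mentions = Counter()          # how often each distilled fact has come up
persistent_memory = set()     # what actually gets remembered

def observe(fact, explicit=False, threshold=2):
    """Promote a fact to persistent memory if repeated or confirmed."""
    mentions[fact] += 1
    if explicit or mentions[fact] >= threshold:
        persistent_memory.add(fact)   # a distilled fact, not a transcript

observe("user is writing a book about citizen diplomacy")
observe("user is writing a book about citizen diplomacy")  # repeated
observe("user does news summaries", explicit=True)         # confirmed

print(sorted(persistent_memory))
```

Note what is stored: compact summaries (“user is writing a book about citizen diplomacy”), never the raw conversation.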
Where is the memory stored?
It is not stored inside the model itself. Instead, the distilled data is stored on OpenAI’s servers as vector embeddings (compact numerical representations that can be retrieved efficiently). When a new conversation starts, the system searches for relevant past data and blends it into the dialogue, creating an illusion of continuity. The developers present memory mode as no black box: if you ever suspect the system remembers too little (or too much) about you, you can inspect, update, or delete the stored information from the chat at any time. Either way, the data is kept in abstracted form; there are no complete verbatim transcripts, only key distilled facts.
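Embedding-based retrieval can be shown with a toy sketch. The 3-dimensional “embeddings” below are hand-made to keep the arithmetic visible; real systems use learned embedding models with hundreds or thousands of dimensions, but the retrieval step, rank stored vectors by cosine similarity to the query, is the same idea.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Stored memories: distilled fact -> its (hand-made) embedding
memory_store = {
    "user is writing a book": (0.9, 0.1, 0.0),
    "user's child is six":    (0.1, 0.9, 0.1),
}

def recall(query_vec, k=1):
    """Return the k stored facts most similar to the query embedding."""
    ranked = sorted(memory_store,
                    key=lambda f: cosine(memory_store[f], query_vec),
                    reverse=True)
    return ranked[:k]

print(recall((0.0, 1.0, 0.0)))  # ["user's child is six"]
```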
Concluding Thoughts: The Impact of AI on Human Memory
Our final paper has no direct connection to memory in agentic systems, but it offers a fascinating and somewhat unsettling perspective on how generative AI is changing the nature of memory itself. In “AI and Memory”, Andrew Hoskins notes that AI is not simply extending human memory or aiding recall; rather, it liberates memory from traditional constraints, creating what he calls a “third mode of memory”. In this mode, memory is no longer an act of retrieval but a continually reconstructed process in which a past that was never actually experienced is generated, modified, and presented as real. He argues that AI constructs a “conversational past”: an evolving digital representation of memory that exists independently of human agency. Through LLMs and AI-driven services, past events are continuously reinterpreted and blended, blurring the line between lived experience and artificial reconstruction. This is especially evident in the rise of AI-generated “deathbots”, which let people interact with digital versions of the deceased, raising ethical and philosophical questions about consent, authenticity, and the permanence of digital legacies.
Beyond personal memory, Hoskins also explores the broader implications of AI-driven transformations on collective historical narratives. As AI reshapes the ways society records, remembers, and even forgets, traditional memory markers — such as archival records, personal recollections, and oral histories — increasingly face the risk of being replaced by AI-generated alternatives that may lack the foundation of lived experience. He warns that as AI reconstructs and repurposes the past in ways that people never consented to, human control over memory will gradually diminish.
While his article focuses on the socio-cultural aspects of AI and memory, the important issues it raises resonate with discussions of memory in agentic systems. Just as AI is changing human memory, it is also redefining how autonomous systems store, retrieve, and utilize knowledge. If AI models can generate memories rather than just retrieve them, what does this mean for agentic workflows that rely on past interactions to inform future decisions? How do we differentiate between experiences learned by AI and reconstructions of past events generated by AI? As we explore the role of memory in AI-driven systems, these are crucial considerations, as the distinction between stored knowledge and dynamically created pasts may become increasingly blurred. This necessitates further reflection on the growing role of AI in shaping collective memory.
“Memory can change the shape of a room; it can change the color of a car. And memories can be distorted. They’re just an interpretation, they’re not a record, and they’re irrelevant if you have the facts.” — Leonard Shelby, Memento… or AI?
<End>
By X Partners