Large Language Models (LLMs) are powerful, but their limitations become clear in multi-turn interactions: they lose track of context, repeat mistakes, and forget what matters. Lately, developers have relied on context engineering—clever prompt design, retrieval pipelines, and compression—to work around these constraints. But context alone is ephemeral. To build agents that are reliable, believable, and capable, we need to move beyond context and into memory engineering.

This talk introduces memory engineering as the natural progression of context engineering, exploring how to design systems where data is intentionally transformed into persistent, structured memory that agents can learn from, recall, and adapt with over time. We’ll walk through the data→memory pipeline, the types of agent memory (short-term, long-term, shared), and practical strategies such as reflection, consolidation, and managed forgetting. Finally, we extend the conversation with a Context Engineering++ perspective—a holistic view of how memory, context, and attention can be engineered together to enable the next generation of agentic systems.

Attendees will leave with a clear framework for evolving from prompt engineering to context engineering to memory engineering, and with practical guidance on how to architect agents that don’t just respond, but remember, adapt, and grow.
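To make the abstract's terminology concrete, the memory types and strategies it names (short-term vs. long-term memory, consolidation, managed forgetting) can be sketched in a few lines of Python. This is a toy illustration only; the class and parameter names are hypothetical and do not reflect any specific framework discussed in the talk.

```python
import time
from collections import deque
from dataclasses import dataclass, field


@dataclass
class MemoryRecord:
    """A single remembered observation with an importance score."""
    text: str
    importance: float
    created_at: float = field(default_factory=time.time)


class AgentMemory:
    """Illustrative sketch of short-term vs. long-term agent memory."""

    def __init__(self, short_term_size=5, importance_threshold=0.7):
        # Short-term memory: a bounded window of recent turns.
        self.short_term = deque(maxlen=short_term_size)
        # Long-term memory: a persistent store that survives the window.
        self.long_term = []
        self.importance_threshold = importance_threshold

    def observe(self, text, importance):
        """Record a new observation in short-term memory."""
        self.short_term.append(MemoryRecord(text, importance))

    def consolidate(self):
        """Promote important short-term records into long-term memory."""
        for record in self.short_term:
            if record.importance >= self.importance_threshold:
                self.long_term.append(record)

    def forget(self, max_age_seconds):
        """Managed forgetting: drop long-term records older than a cutoff."""
        now = time.time()
        self.long_term = [
            r for r in self.long_term
            if now - r.created_at < max_age_seconds
        ]
```

In a real system the long-term store would typically be an external database with vector search rather than an in-process list, and consolidation might involve an LLM-driven reflection step rather than a fixed threshold.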
Richmond Alake is the Director of AI Developer Experience at Oracle, leading AI Developer Outreach and Marketing across Oracle’s data and AI ecosystem. His mission is to ensure developers worldwide understand and adopt the capabilities of the Oracle AI Database, including vector search, in-database machine learning, JSON Relational Duality, and the broader integration ecosystem that powers modern AI and agentic applications.

A recognized expert in memory engineering for AI agents, Richmond helps developers navigate the shift from prompt engineering to context engineering and now to persistent memory architectures—a foundational step in building adaptive, reliable, and context-aware AI systems.

Before joining Oracle, Richmond served as a Developer Advocate (AI/ML) at MongoDB, where he led initiatives at the intersection of data, developer experience, and generative AI. As the creator of MemoRizz, an open-source framework for building memory-augmented AI agents, he has shaped the industry’s understanding of how external memory transforms LLMs from stateless chatbots into evolving agentic systems. Richmond has authored 200+ articles on AI systems and developer experience, and has produced courses and technical content for NVIDIA, Neptune AI, O’Reilly, and DeepLearning.AI. He also collaborated with Andrew Ng on DeepLearning.AI’s popular Retrieval-Augmented Generation (RAG) course.

At Oracle, Richmond focuses on expanding developer awareness, adoption, and community engagement around the Oracle AI Database and its integrations with leading frameworks such as LangChain, LlamaIndex, CrewAI, Mem0, and others. His work bridges research, engineering practice, and developer education—helping builders create reliable, believable, and capable agentic AI on top of Oracle’s unified data and memory infrastructure.