The Active Brain: How Google’s “ATLAS” Rewires Itself to Master Infinite Context

In my last article, we explored how DeepSeek’s Engram effectively gave AI a hippocampus—offloading static facts into a massive, efficient lookup table. It was a breakthrough in separating memory from reasoning. But what if the model didn’t just “look up” memories? What if it actually rewired its own brain while it was reading, optimizing its … Read more

The Future of AI Architecture: How DeepSeek’s “Engram” Module Mimics Human Memory to Supercharge LLMs

AI’s “Hippocampus” Moment: Bridging Biological Memory and Machine Learning Architecture

The race for Artificial General Intelligence (AGI) has hit a bottleneck: efficiency. Standard Large Language Models (LLMs) are “computationally heavy” because they don’t know how to separate thinking from remembering. A groundbreaking research paper from Peking University and DeepSeek-AI—“Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models” (arXiv:2601.07372)—is changing the … Read more

An easy introduction to LLM agents – structure and components

Components of an autonomous AI Agent

LLM-based autonomous agents can use Generative AI to automate processes without needing human intervention. There is a subtle distinction between autonomous agents and workflows, as explained in this Anthropic blog. If you know the series of steps needed to automate a process, then you can use a workflow. However, if you need a system … Read more