The Ebbinghaus Forgetting Curve shows that we rapidly forget new information over time unless we actively review it. The steepest drop in memory occurs within the first 24 hours and then levels off: decay is fast initially but slows down.
German psychologist Hermann Ebbinghaus pioneered this concept by studying his own memory, finding that regular, spaced repetition is key to strengthening memories and slowing this natural forgetting process, a principle known as the spacing effect.
Key aspects of the curve
Rapid initial decline: Most forgetting happens soon after learning, with significant memory loss in the first day.
Slowing decay: The rate of forgetting decreases over time; you lose less new information as time goes on.
“Use it or lose it”: Unused memories are pruned by the brain, but recalling information strengthens neural pathways.
Spaced Repetition: Reviewing material at increasing intervals (e.g., 1 day, 3 days, 1 week) significantly improves long-term retention.
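The shape of the curve is often approximated as exponential decay, R = e^(−t/S), where t is time since learning and S is a "stability" that grows with each review. A minimal sketch in Python, using illustrative numbers rather than fitted values:

```python
import math

def retention(t_hours: float, stability: float) -> float:
    """Ebbinghaus-style retention: fraction of material still
    recalled t_hours after learning, for a given memory
    'stability' (larger = slower forgetting)."""
    return math.exp(-t_hours / stability)

# With a stability of 24 hours, recall drops steeply at first...
day1 = retention(24, 24)    # sharp loss within the first day
week1 = retention(168, 24)  # near zero after a week

# ...but each spaced review can be modeled as boosting stability,
# flattening the curve over subsequent intervals.
boosted = retention(168, 24 * 4)  # same week, 4x stability after reviews
```

The point of the sketch is the second half: reviewing does not rewind time, it increases S, which is why the curve gets flatter after every spaced repetition.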
How to combat the forgetting curve
Review strategically: Interrupt the curve by reviewing information shortly after learning (e.g., within 24 hours) and then at spaced intervals.
Use active recall: Engage in self-testing, practice quizzes, or use flashcards to force your brain to retrieve the information.
Make it engaging: Use interactive content, real-world examples, or context to make learning more memorable.
Focus on gaps: Use review sessions to identify what you don’t know and spend more time on those weak areas.
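The schedule described above (review soon after learning, then at growing intervals) can be sketched as a tiny scheduler. The intervals and function names here are illustrative, not a validated protocol:

```python
from datetime import date, timedelta

# Illustrative review intervals in days: shortly after learning,
# then at increasing gaps, as suggested above.
INTERVALS = [1, 3, 7, 14, 30]

def next_review(learned_on: date, successful_reviews: int) -> date:
    """Date of the next review after a given number of successful
    recalls; the gap grows with each success (spacing effect)."""
    step = min(successful_reviews, len(INTERVALS) - 1)
    return learned_on + timedelta(days=INTERVALS[step])

def demote(successful_reviews: int) -> int:
    """A failed recall sends the item back to the shortest interval,
    which is also where active-recall testing exposes weak areas."""
    return 0
```

For example, an item learned on January 1 and recalled successfully twice would next be due on January 8, a week later.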
The fear of moving and difficulty with spatial prediction after a traumatic brain injury (TBI) are common symptoms of visual-spatial processing dysfunction and anxiety. These issues stem from the brain having trouble interpreting and organizing visual information, which can affect balance, posture, and a person’s sense of their position in the world.
Understanding the Challenges
Spatial Disorientation: A TBI can disrupt the brain’s ability to create a consistent “map” of the environment and your body’s place within it (spatial orientation).
Navigation Problems: You might get lost in familiar places, have trouble following directions, or feel panicked in new settings, as the ability to learn and remember routes (wayfinding) is impaired.
Balance and Coordination Issues: The brain’s difficulty integrating visual information with sensory input for balance can lead to clumsiness, an increased risk of falls, and a general feeling of instability.
Anxiety and Fear: The unpredictability of these symptoms can naturally lead to significant anxiety or fear of movement and leaving a “safe” space. This is the nervous system’s protective response to perceived danger.
Strategies for Coping and Management
It is important to work with a team of healthcare professionals specializing in TBI to develop a personalized approach.
Seek Professional Help:
Neuro-visual specialists and occupational therapists can provide targeted therapies and aids such as special prismatic lenses to improve spatial processing, balance, and gait.
Neuropsychologists and counselors can help manage anxiety and fear through therapies like Cognitive Behavioral Therapy (CBT), which teaches strategies to change anxious thoughts and manage symptoms.
Consult a GP or neurologist about your symptoms, as they may recommend medication in severe cases.
Implement Practical Strategies:
Break down tasks: Divide larger, overwhelming tasks (like a trip to the shop) into smaller, manageable steps.
Use landmarks and path learning: Focus on memorizing specific landmarks and sequences of turns along a route, rather than relying on an abstract mental map.
Modify your environment: Keep your home clutter-free and well-lit to aid orientation. You can use apps or GPS for navigation, though some people find these confusing.
Gradual exposure: Under the guidance of a professional, gradually expose yourself to situations that trigger your anxiety to slowly build confidence.
Plan ahead: Plan travel routes in advance and consider traveling at less busy times to reduce overstimulation.
Focus on Well-being and Support:
Talk to others: Share your feelings with a trusted friend, family member, or join a TBI support group to gain perspective and coping strategies from others who understand.
Practice relaxation: Use deep-breathing exercises, visualization techniques, or mindfulness to manage stress and anxiety.
Prioritize self-care: Ensure you get enough sleep, eat a healthy diet, and engage in gentle physical activity as advised by your healthcare provider.
Be patient with yourself: Recovery from a TBI is a process and can take time. Celebrate small achievements and be kind to yourself.
For more information and support, you can contact organizations such as Headway – the brain injury association in the UK or the BrainLine website in the US for valuable resources and personal stories.
Memory is what makes us human. While other animals also have the ability to learn and remember, we often assume that their experiences lack the emotional and intellectual depth of something like Marcel Proust’s famous madeleine. At its core, however, this mental experience is simply the translation of molecular processes and cellular mechanics in neurons. Scientists are now uncovering that the basic mechanisms of memory also function in other types of cells. A kidney cell might not recall “Hotel California” as The Eagles sang it, but its learning processes could provide valuable insights into how memory works, why it deteriorates, and how we might prevent such decline.
Over 2,300 years ago, Plato and Aristotle likened memory to engravings on a wax tablet. For centuries, great thinkers struggled to understand how we are able to learn and remember. It wasn’t until the 19th century that German psychologist Hermann Ebbinghaus began to outline the processes of memory through experimental methods. In 1904, German zoologist Richard Semon defined the engram as the physical substrate of memory, a neuronal structure. However, it wasn’t until the latter half of the 20th century that modern biological techniques allowed scientists to begin unraveling the molecular mechanisms that house our memories.
One of the memory phenomena Ebbinghaus described in 1885 is known as the spacing effect: we retain information better if we study multiple times over a longer period than if we cram all at once in a short span, even when the total amount of study is the same. It’s the classic difference between regular study habits and cramming before an exam. Despite our attempts to convince ourselves that cramming works better, the spacing effect has been consistently validated through research.
The kidney learns
It’s not just humans that exhibit this phenomenon. Studies have confirmed the spacing effect in animal models with simpler nervous systems, such as sea slugs, which are commonly used in memory research, and even in cultured neurons. In these experiments, researchers apply stimulation pulses to neurons and observe the response, specifically the activation of a gene that produces a protein called CREB, which then activates other genes involved in memory formation. With the same total amount of stimulation, neurons respond more effectively — they “learn” better — when the pulses are spaced out over time rather than delivered all at once.
However, as neuroscientist Nikolay Kukushkin of New York University points out: “No one has ever seen this effect of spaced memory outside the nervous system.” Since all cells in the body share the same complete genome, including the CREB gene, and this protein is also present in tissues with different functions, Kukushkin and his team decided to investigate whether non-neural cells might respond in the same way — in other words, whether they also “learn” better when exposed to stimuli at intervals rather than in a single burst.
To test this, they used modified kidney cells, where the activation of CREB by chemical stimuli resulted in a visible and measurable output: the production of a glowing protein derived from fireflies. In this way, Kukushkin and his team discovered that kidney cells also adhere to the spacing effect rule. Specifically, four three-minute pulses of stimulation, spaced 10 minutes apart, generated more light 24 hours later than a single 12-minute pulse. According to Kukushkin, “The difference between the two patterns is detected in a similar way to how neurons do it.” He believes that “non-neural cells are much smarter than we think,” and this ability to learn better by spacing out stimuli “could be a fundamental property of all cells.”
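One intuition for why spacing wins can be captured in a toy model: continuous stimulation desensitizes the pathway, while rest lets it recover. This is a deliberately simplified sketch with made-up parameters, not the actual CREB biochemistry:

```python
def response(pattern, dt=1.0, recover=0.05, fatigue=0.3):
    """Toy model: each minute of stimulation adds to a downstream
    'memory' readout, but the pathway fatigues during continuous
    input and recovers during rest. All parameters are illustrative,
    not measured values.

    pattern: list of (minutes, stimulating?) segments.
    """
    sensitivity = 1.0   # fraction of the pathway still responsive
    output = 0.0        # accumulated downstream product
    for minutes, on in pattern:
        for _ in range(int(minutes / dt)):
            if on:
                output += sensitivity * dt
                sensitivity *= (1 - fatigue * dt)          # desensitize
            else:
                sensitivity += recover * dt * (1 - sensitivity)  # recover
    return output

# Four 3-minute pulses separated by 10-minute rests, vs a single
# 12-minute pulse: the same 12 minutes of total stimulation.
spaced = response([(3, True), (10, False)] * 4)
massed = response([(12, True)])
```

Under these assumptions the spaced pattern produces a larger readout than the massed one, mirroring the qualitative result of the experiment: identical total stimulation, different outcome depending on timing.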
Cells that learn
The study by Kukushkin and his team, published in Nature Communications, marks the first time a complex memory phenomenon thought to be exclusive to the nervous system has been observed in non-neuronal cells. However, it builds on a growing body of research showing that individual cells, whether in unicellular organisms or as part of a multicellular organism, are not passive to prior experiences; rather, they learn from them. As a result, their future responses to specific stimuli differ from their initial reactions.
Another recent study, conducted by researchers at Harvard University and the Center for Genomic Regulation (CRG) in Barcelona, used computer simulation models to uncover how individual cells store memories of past experiences to exhibit habituation behaviors, such as becoming accustomed to a noise or smell. Rosa Martínez-Corral, the lead researcher at CRG, suggests that “this could represent a form of memory at the cellular level, enabling cells to both react immediately and influence future responses.”
Since our own memories are also rooted in molecular and cellular processes, these findings are considered important for advancing our understanding of how memory functions. According to Kukushkin, they “could lead to better ways to enhance learning and treat memory problems.”
Furthermore, these discoveries could help overcome resistance to medical treatments: cancer cells, for example, can learn to tolerate chemotherapy, and the immune system can become used to the presence of malignant cells and stop responding to them. Memory extends beyond the brain, and research in simpler systems, Martínez concludes, “can be useful to address many other fundamental questions.”
Sign up for our weekly newsletter to get more English-language news coverage from EL PAÍS USA Edition
Memory is a gift that comes with a whip. It allows us to relive the past, and our identity is built upon this capacity, but it can also bind us to traumatic memories that can haunt our lives. Without memory, moreover, it’s impossible to imagine things we haven’t yet experienced. “Memory and imagination are two sides of the same coin,” says Steve Ramirez, 37, a researcher at Boston University. “We know this because if we put you in an MRI [magnetic resonance imaging] machine and you recall something from your childhood, we see a pattern of activity, and if we ask you to imagine a future scenario — going home tonight and having dinner, for example — the same areas are activated,” he explains.
A century ago, it was proposed that every experience leaves a measurable physical change in the brain, which was dubbed an engram. These changes occur when we learn something, and accessing these modifications is what we experience as memory. But the definition of the substance that underlies memory was ambiguous. In 2011, while working in Nobel laureate Susumu Tonegawa’s lab at MIT, Ramirez and his colleague Xu Liu managed to reactivate a panic memory in a mouse by manipulating a specific group of neurons in the animal’s hippocampus. First, they labeled the cells that were activated when the rodent received an electric shock in a particular context. Days later, in a completely different environment, they activated the labeled group of neurons using optogenetics, and the mouse froze in fear without any other external stimulus. This proved that they could precisely “switch on” a memory and make the animal experience it.
Ramirez recently published How to Change a Memory, a personal account of his quest to alter the past through memory manipulation. He also warns of the risks of technology capable of modifying our very essence and reminds us that memory is not like a book that always says the same thing; it involves a great deal of reconstruction and changes over time.
Question. The engram is distributed throughout the brain; it seems to sometimes travel from one region to another, involving different connected parts, different connected cells… Is it possible to say: this memory is this synapse or these neurons?
Answer. I think it will be possible, but right now I see it like a Word document: when you use “Save As” and save the most recent version of the document with all the changes. I think memory, when we recall it, is like that: it’s like recording it again with “Save As.”
We can still say there’s an engram for that memory, but we might end up asking questions like: What’s the most recent version of that memory? Are there previous versions because we’ve updated it so many times? Because we use “Save As” every time we recall it.
So yes, I do believe we can have an engram for a memory distributed throughout the brain, but it’s a very flexible phenomenon. It’s not something fixed that physically exists in specific points of the brain and that’s it; it transforms over time.
Q. In the past we tended to imagine memory as something fixed. How has science changed our understanding of memory?
A. One of the surprises of my research is that, although an engram can transform each time we recall it and can move through different areas of the brain, activating only a small part of it is still enough to bring that memory back. That was the big surprise of our early work a decade ago: we didn’t need to find the entire engram distributed in three dimensions throughout the brain to activate it. It was enough to find some of the cells we knew were important for that memory and activate them to reactivate the entire memory.
If I’m walking through Boston and I go into a store, that single smell of a cupcake can bring back a world of memories: eating a cupcake a week ago, or maybe a birthday party. Sensorially, it’s just a smell, but it triggers a whole set of memories.
Q. It also seems that many memories remain in our brain, but we cannot access them. Do we form memories for everything, and only some are accessible? Or do some memories disappear because there is no space?
A. We don’t know the exact answer. My speculation is that the brain stores much more than we think, but it doesn’t need to access everything, only what’s relevant for making decisions. For example, if you’re trying to remember the name of someone you just met, you might not need to recall your entire history of interactions with that person; you just need to remember the name. But if you’re trying to remember everything else, you might be able to do so if you think more deeply or if you find clues in your surroundings.
Sometimes you might be daydreaming, walking, or talking to a friend and suddenly remember something you haven’t recalled in 10 or 20 years. And just a second before, you would have said that memory was lost forever. It’s like thinking a book is no longer in the library, but then it reappears.
This leads me to believe that we have more trouble accessing memories than forming them: we live through many experiences and form many memories, but having difficulty accessing them doesn’t mean they don’t exist. It means the “librarian” is having trouble finding the book.
I mention this because in experiments with mice, there has been one success story after another: we have been able to artificially activate memories that were considered completely lost in almost every type of amnesia. Alzheimer’s, sleep deprivation, drug addiction, even childhood amnesia — we all have memories formed before the age of three that we don’t recall. And in all these cases, we have been able to restore memories that were thought to be lost. This suggests that they are there and that we just need a way to retrieve them.
Q. In rodents, the brain can be opened and optogenetics used to manipulate memory, but not in humans. How could these modifications be performed in people?
A. The aim would be to find the least invasive method possible, because we’re not going to use optogenetics, nor implant optical fibers, lasers, or viruses in the human brain. Too many things can go wrong. There are smarter ways: instead of putting a laser in your brain to reactivate the memory of your last birthday, I can simply ask you what you did on your last birthday. That verbal stimulus will reactivate the memory.
Language is a powerful tool. This is part of cognitive-behavioral therapy: finding the right combination of words and meaning to retrieve a memory. Or we can ask ourselves: what things in the world activate the hippocampus in humans? Music, exercise, therapies… In the future, there may be drugs that selectively increase the activity of areas like the hippocampus.
Q. You propose modifying memories, even changing bad memories into good ones, but while a bad memory can be painful, it’s also part of who we are. Is it possible to alter a memory without affecting everything else?
A. In mice it’s easy because we can find the exact cells that contain that memory. In humans it’s a very important point. I think 80% of people don’t want to change any aspect of their memory for the reasons you mention: they’re part of our identity. Presumably, they’ve made us wiser, stronger, or more aware.
One way to handle this is to restrict the idea of memory manipulation to cases where it would be therapeutically useful: people with post-traumatic stress disorder, depression, or generalized anxiety disorder, who would indeed benefit from the treatment. In other words, think of memory manipulation as a medication, not as something recreational for the general population.
Q. You also talk about sculpting your brain or your knowledge. Many people wish they had a better memory or were able to recall many books. Is that possible?
A. So far, to improve memory, what we know is that the things that work are precisely those associated with a healthy life, but which are difficult to maintain: sleeping well, exercising regularly, not smoking, having social interactions, going outside, and being physically involved with the world.
I wish there were a greater institutional push to highlight how beneficial these activities are, because then we would have cities with more parks, with more access to bicycles — which we know is good for the brain — and built environments that facilitate these habits for the entire population.
Q. Are you concerned about the implications of humanity’s blind faith in technology? We know that eating well and exercising work, but industrial society has created a type of diet that makes us sick and then tries to cure us with other technologies, such as weight-loss drugs.
A. It’s very human to always look for the easy way out: the vaccine that will change everything, the mental “upload” that loads 10 books, or something like The Matrix. That, if it ever comes, will be a long way off. And it would be fun… but it can’t be at the cost of our lives and our well-being. I remember the movie Wall-E, where the humans of the future are all sitting in floating chairs, moving at the touch of a button, unable to do anything for themselves because everything has been solved by technology. That doesn’t seem like a fulfilling life.
The key is not to lose sight of what we value: going out to play football, taking care of our children, interacting with the world in meaningful ways. If we don’t lose sight of those things, we can build thriving societies, instead of sowing the seeds of our own inactivity.
Q. Sometimes we think of our memory as if it were a file on a computer, but when I talk to ChatGPT, I feel like something is missing, something that can’t be codified. Does this also happen with memory?
A. I think what’s not codable — what’s fundamental here — are all our little mistakes. Our flaws, our imperfections. It’s those little deviations that make us unique. It’s those imperfections that we deliberately remove from code to make the machine more efficient. But humans, when we talk, don’t just communicate content: we digress, we go off on tangents, we come back to the main point. Those digressions are part of the substance of a conversation.
Our biology is imperfect, yes, but it’s more than enough to survive, thrive, and build societies. Perhaps that’s what’s missing today in systems like ChatGPT: the human texture of imperfection.
Q. We often assume that accurately remembering the past is always a good thing, but sometimes our instincts lead us otherwise. In politics, for example, people may prefer to remember events in ways that best align with their identity. To what extent, then, is memory less a faithful record and more an adaptive construction?
A. There are theories that say memory is like a time machine: you can go back to a moment in the past and relive it. But that’s only half the story. The other half says that our memories are building blocks. We can combine and recombine them, not only to revisit the past, but to imagine things we’ve never experienced.
According to these theories, each memory we recall is the brain’s best prediction of what it believes happened. It’s not a literal reproduction. We’re very good at predicting: we get many details right, we remember enormous amounts of information, but like all predictions, it’s not perfect.
The idea is that memory and imagination are two sides of the same coin. We know this because if we put you in an MRI and you recall something from your childhood, we see a pattern of activity. And if we ask you to imagine a future scenario — going home tonight and having dinner, for example — the same areas are activated, especially in the hippocampus.
The theory states that we take memories from the past, recombine them, and thus construct predictions of what the future might hold. Simply put: imagination is made possible by memory.
As OpenAI angles its way into healthcare, Anthropic is hot on its tail.
On Sunday, it announced Claude for Healthcare, a suite of tools that allow healthcare providers and consumers to leverage the company’s chatbot for medical purposes through “HIPAA-ready products.”
Claude can now review prior authorization requests, appeal insurance claims, triage patient messages, and support healthtech development for startups, Anthropic said in its announcement.
On the patient side, users can grant Claude access to lab results and health records to summarize medical history, explain test results, and recognize patterns in fitness and health metrics. Anthropic said the “integrations are private by design,” noting that users can choose exactly what information they want to share with Claude, must explicitly opt in to allow the chatbot access to their records, and that the data will not be used to train models.
In a livestream on Monday, Anthropic CEO Dario Amodei said the AI and medical fields need to work together to deploy it safely, ethically, and quickly. “Healthcare is one place you do not want the model making stuff up,” he said.
“It’s not a replacement for a doctor … it’s a second opinion, and that is usually very helpful,” Amodei added. “Not everyone is getting the quality of care that they could get if they had the help of these systems.”
The release comes days after OpenAI debuted ChatGPT Health, which provides users with personalized health and wellness insights on topics like workouts, diets, and test results, based on their medical records and synced data from fitness apps. And on Monday, OpenAI announced it was acquiring a one-year-old startup, Torch Health, to bolster health record ingestion in ChatGPT Health.
Beyond personal health recommendations, several tech giants are eyeing biotech and life sciences to make better use of AI. On Monday, Nvidia and Eli Lilly announced a $1 billion investment over five years into a lab that would use AI to aid in drug discovery. Additionally, Nvidia and Microsoft researchers, working on an international team, used AI to discover new gene-editing and drug therapies.
But getting AI involved in personal health can be a risky endeavor, as evidenced by Google’s withdrawal of AI health summaries after serving up inaccurate and misleading healthcare information that put users at risk.
AI in healthcare is a double-edged sword. Of course, AI has fundamental issues that make its usage in health problematic. These systems still hallucinate and offer up false information with full confidence. AI models are also parrots, ready to spill out their training data when prompted in just the right way. But the US has a healthcare problem, with more than 26 million people, roughly 8% of the population, currently uninsured. And with 40 million people a day already asking ChatGPT for healthcare advice, these tech firms face the challenge of making their models as safe and accurate as possible when health and safety are at stake.
For millions of people, losing their sense of smell quietly reshapes daily life. Meals lose nuance, familiar places feel strangely distant, and critical warning signals like smoke, gas or spoiled food become harder to register. Smell’s deep links to memory and emotion make its absence especially disorienting, and once damaged, the system is notoriously difficult to restore. That challenge has led some researchers to stop asking how to fix smell, and start asking whether its information might reach the brain another way.
Smell loss, or anosmia, affects tens of millions of people worldwide, often following viral infections, head trauma, or neurological disease. Unlike vision or hearing, olfaction depends on fragile neural pathways that connect directly to brain regions involved in emotion and memory. When those pathways are disrupted, the result is not just sensory loss, but a profound change in how people experience the world.
In a study published in Science Advances, researchers explored an alternative strategy that sidesteps the damaged olfactory system entirely. Instead of trying to restore smell itself, they focused on preserving what smell provides: information about the chemical environment around us. In other words, they asked whether perception could be rebuilt by disentangling how odors are detected from how they are experienced.
Their prototype device does exactly that. It separates detection from perception, first capturing odors in the air with an artificial sensing system and translating them into a digital signature. That information is then delivered to the brain through a different sensory channel, one that remains functional even when smell is lost. Rather than activating the olfactory nerve, the system stimulates the trigeminal nerve, a sensory pathway in the nasal cavity responsible for conveying touch, temperature, and irritation.
The stimulation produces a distinct physical sensation inside the nose. Users are not smelling in the traditional sense. Instead, with training, the brain learns to associate specific stimulation patterns with particular odors, allowing people to tell smells apart through sensation rather than scent. In effect, the brain builds a new interpretive map, using touch to stand in for chemical perception.
This approach draws on a concept known as sensory substitution, in which information from a missing or impaired sense is rerouted through a functioning one. The nasal cavity is uniquely suited for this strategy because it houses both systems side by side: the olfactory network for smell and the trigeminal system for somatosensory signals. By leveraging that second pathway, the device offers a way to transmit odor information without relying on the damaged circuitry of smell.
To test the idea, the researchers ran a series of experiments involving 65 participants, including people with normal olfaction and others with partial or complete smell loss. Participants were able to detect odorant molecules using the device, and most could reliably distinguish between different odors. Crucially, the system performed just as well for individuals who could not smell as it did for those who could, suggesting that the trigeminal pathway provides a stable and broadly accessible route for transmitting these signals.
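The study does not publish its encoding scheme, but the sensory-substitution pipeline it describes (odor, to digital signature, to stimulation pattern) can be illustrated in a few lines. All signatures, patterns, and the nearest-match rule below are assumptions for illustration, not the device’s actual encoding:

```python
import math

SIGNATURES = {
    # Hypothetical sensor feature vectors for known odors.
    "coffee": (0.9, 0.2, 0.1),
    "smoke":  (0.1, 0.8, 0.7),
    "citrus": (0.3, 0.1, 0.9),
}

PATTERNS = {
    # Hypothetical trigeminal stimulation patterns, encoded as
    # (pulse frequency in Hz, intensity 0-1). Each odor gets a
    # distinct, learnable sensation.
    "coffee": (5.0, 0.3),
    "smoke":  (20.0, 0.9),   # urgent odor mapped to a salient pattern
    "citrus": (10.0, 0.5),
}

def classify(reading):
    """Match a raw sensor reading to the nearest known odor signature."""
    return min(SIGNATURES, key=lambda name: math.dist(reading, SIGNATURES[name]))

def stimulation_for(reading):
    """Digital odor signature -> stimulation pattern to deliver."""
    return PATTERNS[classify(reading)]
```

With training, the user’s brain does the last step: it learns to associate each delivered pattern with the odor it stands for, which is the substitution the study demonstrates.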
The device does not restore the sensory richness or emotional immediacy of smell, and the researchers are careful not to frame it as a replacement. At this stage, it remains a proof of concept. But it demonstrates something new: that the brain can learn to access chemical information through touch when smell itself is no longer available.
More broadly, the work reflects a shift in how sensory loss might be addressed. Rather than focusing solely on repairing damaged systems, it suggests that perception itself can be rebuilt by translating information across senses.
For people living with anosmia, that reframing offers a quieter form of possibility, not the return of smell, but a new way to engage with the chemical world through learning, adaptation, and experience.
Jensen Huang, commander of a multi-trillion-dollar computing hardware empire, thinks everyone should stop being so negative about AI and all the ways it could potentially upend civilization, because it’s a “doomer narrative” that’s “not helpful to society,” TechSpot reports.
The Nvidia chief made these remarks during a recent episode of the No Priors podcast, in which he more or less dismissed dire predictions for AI’s future as nothing but science fiction.
“I think we’ve done a lot of damage with very well-respected people who have painted a doomer narrative, end of the world narrative, science fiction narrative,” Huang said. “And I appreciate that many of us grew up and enjoyed science fiction, but it’s not helpful. It’s not helpful to people. It’s not helpful to the industry. It’s not helpful to society. It’s not helpful to the governments.”
“Doomer messages causes policy, and that policy may affect the industry in some way,” he added.
Scrambling his take somewhat, Huang then caveated that he doesn’t totally dismiss everything that AI critics have to say.
“It’s too simplistic to say that everything that the doomers are saying are irrelevant,” Huang said. “That’s not true. A lot of very sensible things are being said,” he added, neglecting to provide any examples.
Nvidia’s chips are essential for training AI models, and the rabid demand for them has catapulted the chipmaker to a nearly $5 trillion valuation. To keep the gravy train going, Huang has sometimes been even more of an AI booster than the actual AI companies themselves. When Anthropic CEO Dario Amodei warned that AI could erase half of entry-level white collar jobs in the next five years, Huang countered that Amodei was just fearmongering to make it seem like Anthropic was the only company responsible enough to build AI.
And to be fair to Huang, it’s a good point, with a lot of apocalyptic-sounding warnings of AI risks tending to distract from its more mundane issues. Nonetheless, Huang has made plenty more outlandish AI claims himself, such as reportedly telling his employees that they’re “insane” if they don’t use AI to do everything — nevermind his out-of-touch proclamation that AI won’t take your job, but will instead make you work even harder.