Shawn Shen believes that AI will need to remember what it sees in order to succeed in the physical world. Shen's company Memories.ai is using Nvidia AI tools to build the infrastructure for wearables and robotics to be able to remember and recall visual memories.
Memories.ai announced a collaboration with semiconductor giant Nvidia at its GTC conference on Monday. Through the partnership, Memories.ai uses Nvidia's Cosmos Reason 2, a reasoning vision language model, and Nvidia Metropolis, an application for video search and summarization, to continue developing its visual memory technology.
Shen (pictured above left) told TechCrunch that he and his co-founder and CTO, Ben Zhou (pictured above right), got the idea for the company while building the AI system behind Meta's Ray-Ban glasses. Building the AI glasses got them thinking about how people would actually use the tech in real life if users couldn't recall the video data they were recording.
They looked around to see if they could find anyone already building that kind of visual memory solution for AI. When they couldn't, they decided to spin out of Meta and build it themselves.
"AI is already doing really well in the digital world. What about the physical world?" Shen said. "AI wearables, robotics need memories as well. … Ultimately, you need AI to have visual memories. We believe in that future."
The ability for AI systems to remember, generally, is relatively new. OpenAI updated ChatGPT to start remembering past chats in 2024 and refined that feature in 2025. Elon Musk's xAI and Google Gemini have also launched their own memory tools in the past two years.
But these advances have largely focused on text-based memory, Shen said. Text-based memory is far more structured and easier to index, but it isn't as useful for physical AI applications that largely interact with the world through sight and visuals.
Memories.ai was launched in 2024 and has raised $16 million to date, through an $8 million seed round in July 2025 and an $8 million extension. The round was led by Susa Ventures and included Seedcamp, Fusion Fund, and Crane Venture Partners, among others.
Shen said successfully building this visual memory layer required two things: building the infrastructure needed to embed and index videos into a data format that can be stored and recalled, and capturing the data needed to train the model to do just that.
The company launched its large visual memory model (LVMM) in July 2025. Shen said it could be compared to a smaller version of Gemini Embedding 2, a multimodal indexing and retrieval model that was released earlier this month.
For data collection, the company created LUCI, a hardware device worn by the company's "data collectors" that records video used to train the model. Shen said they don't plan to become a hardware company, nor to sell these devices; rather, they built their own because they weren't satisfied with off-the-shelf video recorders, which focused on high-definition, battery-draining video formats.
The company launched the second generation of this LVMM and signed a partnership with Qualcomm to run on Qualcomm's processors starting later this year.
Memories.ai is also already working with some of the large wearable companies, Shen said, but declined to disclose which ones. Despite some demand now, Shen sees even bigger opportunities in wearables and robotics yet to come.
"In terms of commercialization, we're more focused on the model and the infrastructure, because ultimately we think the wearables and robotics market will come, but it's probably just not now," Shen said.
