The project is fundamentally an extension of myself.
Throughout my journey with photography, the most intimate moments have been those in which I had the chance to have deeper conversations with people about a photograph. They share what they feel. I keep a note of it. I analyse why one person feels a certain way while another, at times, feels the opposite.
What's the invisible punctum in that photo? Building on Roland Barthes's idea of the "punctum", I try to expand the concept by searching for the "invisible punctum" in photographs. The punctum, as Barthes describes it, is that one nuance in the image that punches you. The part that hits you deep. The thing that gives you that "ahh" moment.
This project is a reflection of the overwhelming number of photographs we capture these days. It's a critical analysis of how we comfortably upload everything to the cloud and allow AI to analyse it deeply, shaping our daily lives by selectively showing us what it finds.
This project is also an acknowledgement of how obsessed we have become with taking photographs, to the point where we don't even realise when we overdo it, and of how it all goes to the cloud, where AI analyses the photos and tries to find meaning in them. It then shows them back to us with more stories and meaning: Instagram Recaps, Reels, iCloud Memories, and so many other recall mechanisms.
This entire process of recalling through AI rests on AI first trying to find meaning in the photographs, which, as this research shows, is fundamentally flawed. That means the way memories appear in our phone applications is far too fabricated, shallow, and constructed, lacking that much-needed human touch.
Aren't we supposed to remind ourselves of what's important to us and what we want to recall, instead of an AI deciding what we should remember?
This project may not be an AI analysis at all.
It's me telling myself again and again that it's okay to hold that dark, blurry photo I captured one dawn, close to my heart.
The moment I took that photo, that night or perhaps at dawn, after crying all night by the seashore, I knew that this photograph would mean more to me than my entire library of 100,000 photos!
People might see that photo and feel nothing, but it means the world to me.
Later, whenever I had the chance, I would ask people around me to share their thoughts about the image. After hundreds of responses, I realised that they couldn't decipher the true feelings behind it. But then, are they wrong? What if their feelings are just as valid?
I have often wondered what AI would feel about it.
Maybe it's just another human (made out of a million human ideas and memories).
Can AI be trained to feel that image for real? Maybe not.
The meaning evolves with every new observer.
It's the observer who creates the meaning and not the maker.
The maker creates the stimulus.
This project is an attempt to acknowledge the futility of the ego the maker carries in the process of creation. It's an attempt to humble the ego of meaning-making, specifically the intellectualisation of feelings. AI tries hard, but fails every time.
You can't automate becoming.
What you feel when you see a photo is an act of becoming.
The meaning emerges at the moment you view the image.
And that feeling depends on everything you are as a human being. The place you are born, your culture, your upbringing, your emotional state, and your political stance all have a strong influence on what you feel.
In this project, I collect real human responses to photos from my personal archive and professional projects, and try to figure out what people feel when they see them.
You can write your feelings too.
You can also record a voice note about it.
You can check out responses by other people and listen to their voice memos on the images, and compare the different human reactions. You can also see what an AI felt about the image, and then view a comparison between the human and AI responses. You can choose either a cloud model or a local model, and if either model fails due to a technical glitch, a sample response is shown instead.
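For readers curious about the mechanics, here is a minimal sketch of the fallback order described above: try the cloud model first, fall back to the local model, and show a pre-written sample response if both fail. This is not the project's actual code; all names and function signatures here are hypothetical.

```typescript
// Hypothetical sketch of the cloud -> local -> sample fallback described above.
type ModelResponse = { source: "cloud" | "local" | "sample"; text: string };

async function getAiResponse(
  imageId: string,
  cloudModel: (id: string) => Promise<string>,  // assumed cloud inference call
  localModel: (id: string) => Promise<string>,  // assumed local inference call
  sampleResponse: string                        // canned response shown on glitches
): Promise<ModelResponse> {
  try {
    // Prefer the cloud model when it is reachable.
    return { source: "cloud", text: await cloudModel(imageId) };
  } catch {
    try {
      // Fall back to the local model if the cloud call fails.
      return { source: "local", text: await localModel(imageId) };
    } catch {
      // Last resort: a sample response, so the viewer always sees something.
      return { source: "sample", text: sampleResponse };
    }
  }
}
```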
In the last tab, you see a beautiful visualisation of the most commented images. The images people chose to comment on most often appear the largest, floating as bubbles surrounded by the human responses they have gathered. There is also an option to switch between light and dark modes.
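Again purely as an illustration, and not the project's actual code, the bubble sizing could work along these lines: each image's bubble radius scales with how many times people chose to comment on it, within a fixed range. All names and numbers below are assumptions.

```typescript
// Hypothetical sketch: bubble radius proportional to comment count.
interface Bubble { imageId: string; radius: number }

function layoutBubbles(
  commentCounts: Record<string, number>, // imageId -> number of comments
  minRadius = 20,
  maxRadius = 120
): Bubble[] {
  const counts = Object.values(commentCounts);
  const max = Math.max(...counts, 1); // avoid dividing by zero
  return Object.entries(commentCounts).map(([imageId, count]) => ({
    imageId,
    // The most-commented image gets maxRadius; others scale down towards minRadius.
    radius: minRadius + (maxRadius - minRadius) * (count / max),
  }));
}
```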
All photos in this project are mine. Kindly refrain from using them anywhere without my permission. For media mentions, press releases, or educational purposes, please get in touch with me; I would be happy to take part in a detailed session focusing on the artistic and sociological aspects of this research.
I will be presenting this research publicly for the first time at the BSA Annual Conference, organised by the British Sociological Association, at the University of Manchester, UK, in April 2026.
Abodid Sahoo
Written on 8th Jan 2026 at 01:39 AM