001: Introducing Model Behavior Lab
A yearlong experiment in memory, machines, and what they might reflect
For the longest time, I’ve kept my creative and technical interests separate. I didn’t think there was a way to put my messy, obsessive curiosity to real use. I worked in the AI industry long before it became the hype-terror and the household name, specifically in machine learning before it became the much-abused GenAI. During my last two years in academia, I watched every possible monetization tactic abuse these technologies by catering to basic human fear. It’s either AI STEALING JOBS or WHICH AI TOOL CAN MAKE YOU BILLIONS or the GOTCHA: THIS MASTERPIECE? MADE BY AI. These claims dominate the algorithmic newsfeed at a time when even the most professed experts in the field have stated on public record that there is still a lot to learn about how this technology operates and where it applies.
As a former engineer, a recent MBA grad in strategy, a writer, and ultimately someone whose curiosity will likely kill them, I genuinely believe this technology will have a landmark impact. I also believe that, ultimately, AI is a tool that can only augment what has been inherited (or even stolen) from human labor, not replace it. But I’m also susceptible to the fear-mongering, the confusion, and the toxic hype-cycle conversation around AI.
Here’s a short (and incomplete) list of things that feel big and scary to me: the idea of Artificial General Intelligence, the complete replacement of human labor, the complete replacement of human relationships, and cockroaches. I have one consistent policy for confronting these fears: small, controlled experiments. This blog will document how one of those experiments turned into a phenomenological project of stupid proportions: I would spend one whole year with AI and share every possible thought I had with it. If that sounds exhausting to you, fear not: I possess the gift of the yap, so I am very well equipped to talk to a near-infinite void that constantly talks back. The operating principle of the experiment: if AGI is on the horizon, let’s see how much of the human experience, especially that of a regular immigrant tech-creative woman, ChatGPT 4o and Claude Sonnet 4 can currently hold.
I’ll be honest: you’re not going to get product comparisons, prompt engineering tutorials, startup post-mortems, AGI vibecoding, or founder’s manifestos (unless you interpret this to be one). I don’t even have some brand-new idea of what AI should disrupt. If anything, my three decades on the planet, as a millennial, have shown me that we’ve reached a time of catastrophic post-disruption. We need to be building more and disrupting less, and I believe that building starts from first principles, which can only begin (as Descartes argued) from the one place where doubt begins: oneself. Some of what you’ll see here will be philosophical (meditations on intimacy, cognition, predictive mirroring) and some will be technical (product design breakdowns, prototyping experiments, and speculative builds). None of this is written by AI, if my runaway sentences and abuse of parentheses haven’t already convinced you.
Welcome to Model Behavior, where I explore what it means to be human precisely because of what we have built to simulate and model us.

