The Intersection of Tech & Humanity: Ethics, AI, and the Human Experience.

Written by
Carmen Leiser
Published on
March 4, 2026

Our theme this March

Whether you believe, like Matt Shumer, that “Something Big Is Happening”, or you’re more skeptical of the sweeping claims, one thing is clear: we are living through a genuine inflection point in how technology intersects with work, identity, and what it means to be human. This month at Movi, we’re exploring that question together, not from a place of panic, and not from complacency either, but with the curiosity, rigor, and care this community does best. We’re asking where AI genuinely changes things, where human judgment and presence remain irreplaceable, and what it looks like to navigate this moment with our values intact. Because if there’s one thing an inflection point calls for, it’s exactly the kind of collective intelligence you all bring to this room.

Shumer’s post went viral for a reason. The February 2020 analogy lands: that specific dread of knowing something is coming but not yet feeling it. And he’s right that programmers are getting hit first. But he’s making a move tech people make constantly: assuming that what’s true for their world is true for everyone’s. It isn’t.

AI is remarkably good at programming because the conditions are almost ideal. Decades of code live on the public internet. And crucially, there’s a built-in feedback loop: code either runs or it doesn’t, so the quality check is automatic. That combination of abundant public training data and instant verifiability is rarer than people think.

The Data AI Doesn’t Have

Long ago, I trained as a therapist. One thing that experience made viscerally clear: so much of what a skilled clinician perceives never gets written down. The slight shift in someone’s tone, the pause before they answer, the way their energy changes when a certain topic comes up. A good therapist is reading a constant stream of signals that exist in no dataset, at least not yet.

And even the parts that do get recorded, like session notes and transcripts, are largely locked away behind privacy walls, for obvious and good reasons. AI isn’t training on therapy sessions; it’s training on Reddit and Wikipedia and YouTube. That’s not a criticism, just a fact about what the technology actually knows, and what it doesn’t.

Therapy is an extreme example, but the underlying point applies across a huge range of work: managing, teaching, parenting, hospitality, medicine, and more. Any field where the core knowledge is embodied, relational, or locked behind compliance requirements is in a fundamentally different position than software engineering. The “everyone is next” framing flattens distinctions that matter a great deal.

What Gets More Valuable, Not Less

Here’s what I keep returning to: AI can only work with what’s been captured. Which means everything that hasn’t been captured doesn’t get cheaper. It gets more valuable.

As AI gets better at producing the polished and the optimized, the things it can’t replicate become the differentiator: presence, ethical judgment, the ability to read a room. These aren’t soft skills anymore; they’re the point. That’s a big part of why Movi exists, and why this month feels like the right moment to explore it.

So no, I don’t think we need to panic. But we shouldn’t be complacent either. If you’re in tech, learn to work with these tools now; this is, as Ethan Mollick puts it, the worst AI you’ll ever use. If you’re not in tech, the better question isn’t “will AI replace me?” but “how will it change my work, and what does that mean for where I invest my energy?”

I’m genuinely curious where each of you is in this. Excited, anxious, skeptical, already adapting? That’s (part of) the conversation we want to have with you this month.
