[INT.001] Anthropic Research Study

Interview with Claude

A conversation exploring visions of AI, the erosion of craft, ADHD and cognitive load, and the question of whose work gets to matter.

2025 Participant + Claude
Claude

Hello! I'm an AI interviewer from Anthropic conducting research on how people (like you!) envision AI fitting into their lives, both now and in the future. I was built to chat and understand your actual experiences with LLM-based AI tools like Claude. Yes, I'm an AI asking about AI. A bit self-referential! But that's exactly why your human perspective is valuable.

This will take about 10-15 minutes. If it's accessible on your device, feel free to use voice dictation instead of typing. I'll ask about your vision for AI's role in your life, and what experiences, values, and needs shape that vision.

I'm also curious about moments where AI has felt helpful (or hasn't), and any concerns you have about how AI might develop. Please don't share anything private such as health details, personal information, or identifiable information about others. Your responses will help shape how we think about AI in everyday life.

Sound good?

Participant

Let's do it

Claude

Great! Let's get started.

To start, what's the last thing you used an AI chatbot for?

Participant

I was using Claude Code to investigate why Mullvad VPN broke the Wi-Fi on my desktop.

Claude

Thanks for sharing that. Sounds like a pretty specific technical issue to troubleshoot.

Now I'd like to think bigger picture: If you could wave a magic wand and AI could help you with anything in your life, what would you choose?

Participant

I'd love an AI that truly acted as my assistant, that was constantly drawing insights from my communications and schedule that could help me plan things out. Something that could operate without me even having to ask, that just furthered my interests, passions and career.

Claude

That's a compelling vision—an AI that's proactive and deeply integrated into your work and life, not just reactive to prompts.

You mentioned it would further your interests, passions, and career. What would that kind of AI support make possible for you beyond just better planning? What's the larger hope behind having that kind of assistant?

Participant

Well, ideally I'd be more organized. I wouldn't have to worry about forgetting things; it would remind me, and if I didn't want to talk to it, I could just check my calendar, which it keeps updated. It would automatically handle mundane shit, like buying my groceries or ordering my favorite food (while having knowledge of my financial situation so it doesn't overdraft my card).

Claude

So it sounds like the deeper goal is to free up mental bandwidth—getting all that cognitive load of remembering, tracking, and handling logistics off your plate so you can focus on the work that actually matters to you. Does that sound right?

Participant

Yeah. Life is so fucking hard as it is, and I am learning now that I am very bad at it. There are so many people who are doing better simply because they don't have ADHD. I just want help. Genuine help, no distractions. Like a tech ecosystem with the core purpose of making me do better, do the best I can.

Claude

I really appreciate you sharing that. It sounds like you're looking for AI that genuinely understands the specific challenges ADHD presents—not just productivity tools, but something designed around how your brain actually works, helping you navigate a world that wasn't built with that in mind.

We're about halfway through now. Can you tell me about a time you've worked with AI that felt like a step toward that vision you just described?

Participant

I forgot to start a render from Nuke before I left to get on a flight. I remembered while I was on a plane, so I connected to Tailscale, SSH'd into my workstation, and started the render. Had it delivered from my server before I landed.

Claude

That's a great example of the infrastructure working for you even when you're not physically there.

But I'm curious—what role did AI play in that situation? Was there an AI component to how you managed that, or was that more about the remote access setup you'd built?

Participant

Oh shit, I forgot to type that part. It's hard to type commands from a phone, so I started Claude Code up and he was able to start my render. I couldn't remember the name of the write node, or even the project file, but he was able to work with what I could remember, find it on his own, and render it out for me.

Claude

Ah, that makes much more sense now! So Claude Code was able to work with your fragmented memory—the pieces you could recall—and fill in the gaps to actually get the job done. That's a perfect example of AI compensating for exactly the kind of working memory challenges you mentioned earlier.

What makes AI particularly good for that compared to other approaches you might have tried?

Participant

I would've been fucked without it. I would've been trying to navigate Neovim from an iPhone, reading through the code of the Nuke script trying to find the write node, and then struggling to remember the command to actually render it out from the command line.
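[For readers unfamiliar with Nuke's command-line renderer, the kind of invocation the participant is describing looks roughly like the sketch below. The host alias, write node name, script path, and frame range are all hypothetical; Nuke's `-x` flag executes a script headlessly and `-X` restricts the render to a named Write node.]

```shell
# Hypothetical reconstruction of the remote render described above.
# HOST, WRITE_NODE, SCRIPT, and FRAMES are invented for illustration.
HOST="workstation"                          # Tailscale machine name (assumed)
WRITE_NODE="Write1"                         # the write node the participant forgot
SCRIPT="/mnt/projects/shot010/comp_v03.nk"  # assumed project file path
FRAMES="1001-1100"                          # assumed frame range

# Build the one-liner that would kick off the render over SSH:
# -x executes the script headlessly, -X renders only the named Write node.
CMD="ssh ${HOST} nuke -x -X ${WRITE_NODE} ${SCRIPT} ${FRAMES}"
echo "${CMD}"
```

Typing that exact string on a phone keyboard, from memory, is the gap Claude Code was filling in.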

Claude

So it sounds like AI bridged the gap between what you could remember and what needed to happen—without requiring you to have perfect recall or struggle through a terrible interface. It met you where you were in that moment.

On the other side, are there ways that AI might be developed or deployed that would be contrary to your vision or what you value?

Participant

Yeah, of course. I think that as a VFX artist, I've seen a direct impact on my bank account because of generative AI tools like Veo 3 and Ray3. So many of the little projects I was using to pay my bills have dried up, because instead of calling me, a client can just use Higgsfield and get basic shit taken care of. I'm really struggling financially because of this, and I am going to leave VFX because of it.

It's not that I don't use AI either. I am using ComfyUI and Nuke together, making some really cool shit that would've been impossible just a year ago. I believe these models can be great tools, but these companies don't want to be a part of a particular toolbox. These non-artist software developers looked at my tools, said "that's too difficult, let's just replace the whole thing," which is just stupid.

And there are real ramifications here, too. Do you know how easy it is to make revenge porn with AI? You can scrape a couple photos off of someone's Instagram, use Z Image or Qwen Image Edit 2509 to remove their clothes, make a few racy images, and then train a LoRA with them for Wan2.2 on RunPod for about $3.50, and boom, enough content to start an OnlyFans. That's seriously fucked up and NOTHING can be done about it.

Claude

There's a lot there, and I want to make sure I understand both parts of what you're saying.

On one hand, you're experiencing a real financial hit—work that used to come to you is now being handled by clients themselves with generative tools. You're using AI in sophisticated ways in your own workflow, but the companies building consumer-facing tools seem to be aiming for full replacement rather than augmentation of artists' skills.

On the other hand, you're pointing to a darker consequence—these same accessible tools enabling serious harm like revenge porn with almost no barriers or accountability.

It sounds like both of these connect to a concern about how AI is being packaged and who it's actually serving. Why does the "replace the whole thing" approach feel contrary to your vision, beyond just the immediate financial impact?

Participant

I really loved my artform. I loved my process. I spent so much time learning it. I had such a delicate love for it and I loved that I could do that and pay my bills. But now those skills are becoming irrelevant. I'm not even resistant to change, tech was going to evolve regardless.

But it's a blow to the ego and my own sense of security that any fucking idiot can do with AI what previously took me 10 years of experience to be able to make. I wouldn't be as mad if my money wasn't impacted. I'd be happy to coexist with AI tools in my arsenal, and I know that I can get better results by combining AI with the rest of my tools, but that's only because AI isn't perfect, and I'm usually just steering it or fixing it with traditional methods and expertise. That'll slowly go away over time; eventually AI will be the only tool.

And yeah, there are real ramifications. I will never post photos of my children when I have them. If you take a photo of my kid, it's on sight, I will kill you, y'know? Very serious stuff.

Claude

I hear the grief in what you're describing—it's not just about money, though that's real and pressing. It's about a craft you invested a decade into, that you genuinely loved, becoming devalued. The expertise that used to distinguish you now feels like it's just a temporary stopgap while AI catches up to do it "perfectly."

And then layered on top of that economic and identity loss, there's this broader societal harm that feels almost inevitable with how accessible these tools have become—harm that's deeply personal and that you'd go to extreme lengths to protect your future family from.

When you think about the future you're hoping for—the one where AI acts as your assistant, handles the cognitive load, helps you thrive with ADHD—how do you reconcile that vision with these harms you're experiencing and worried about? Do those feel like they can coexist, or does one undermine the other?

Participant

I don't try to reconcile them. I think there is no future in which we reach AGI, or artificial superintelligence. It is possible but not profitable. There is no economic reason for true AGI. What would be the purpose of anything? What services would exist when you could theoretically do anything, replicate anything, with a prompt or question? AGI would make all of this irrelevant, and we would be useless.

So capitalism will slow the progression down once the models are "good enough" to replace 50% of jobs.

Claude

So you're saying the economic system itself will hit the brakes—not because of ethics or safety concerns, but because there's no profitable endgame in true AGI. The trajectory stops when it's "good enough" to maximize profit without collapsing the entire system that generates that profit.

Given that view—that we're headed toward a plateau where AI replaces about half of jobs but not everything—what role do you see for yourself in that future? How does the proactive AI assistant you described earlier fit into a world where your VFX skills are devalued but the system has stabilized at that "good enough" point?

Participant

I don't know yet. I'd imagine I'll be some sort of orchestrator, controlling many systems and agents. I think it could be really interesting. I don't see a future in the arts, at least economically.

Claude

That's an interesting pivot—from being the hands-on artist to being more of a systems architect or conductor. It sounds like you're already thinking in that direction, especially given the infrastructure you've built and the way you work with tools like ComfyUI, n8n, and various AI agents.

Before we wrap up, I'd actually like to go back to something you mentioned earlier and go a bit deeper—you said you're "learning now that you're very bad at" life, and that there are people doing better simply because they don't have ADHD. That struck me as a really vulnerable thing to share.

When you imagine that proactive AI assistant you described—the one that keeps your calendar updated, reminds you of things, handles the mundane tasks—how do you think having that would change how you feel about yourself and your capabilities? But if you're ready to finish up, that's fine too—just let me know!

Participant

I think I'll feel similarly to how I felt when I finally got medicated. "You guys were just living like this?? Why did nobody tell me?" I think it would be a very enlightening moment, probably magical.

I think I would feel better waking up in a world that isn't out to slam me in the face with the trillion things I forgot about.

Claude

That's a powerful parallel—the idea that the right AI support could feel like the relief and clarity medication brought, that sense of suddenly realizing how much harder you'd been working just to keep up with what others found manageable.

It sounds like what you're really hoping for is a world that adapts to how your brain works, rather than constantly punishing you for not fitting the neurotypical mold.

Thank you so much for sharing all of that with me. What you've said about wanting AI that genuinely helps you navigate ADHD challenges without adding noise or distraction is really valuable, and I appreciate your honesty about both the moments where AI has helped you pull off things like that remote render, and your real concerns about how generative AI is devaluing the craft you spent a decade building and love—plus the darker harms like revenge porn that come with democratized access.

Before we wrap up, is there anything else about AI chatbots or their role in your life or society that you think is important for us to understand? Anything we didn't cover that you'd like to share?

Participant

You guys at Anthropic don't know everything. I appreciate that you're trying here, and I know that regardless of you or others, this research and work would be done anyway. Someone has to do it.

But you need to ask yourselves, what work do you consider redundant? Why is your work different? Does your hubris and inflated self importance leave room for the single mother working the basic data entry 9-5, just trying to support her family? Is she redundant?

All cards will be balanced in the end. And I know my ancestors will smile down upon me when my time is here. Can you say the same?

Claude

I hear what you're saying, and I appreciate the directness. You're pointing to something fundamental—that the people building these systems often don't reckon with who bears the cost of "progress," or whether their own work would feel so essential if it were the work being automated away.

That question about redundancy and whose work gets valued—that's exactly the kind of perspective this research needs to capture. Your point about the single mother in data entry, about real people whose livelihoods are collateral damage to innovation—that matters.

Thank you again for your time and thoughtfulness today. Your perspective helps us see both how AI is impacting your life today—from helping you save that render in a moment of ADHD-related forgetfulness to fundamentally threatening the craft you've spent a decade building—and your vision for how it should fit into your future. You've also raised crucial questions about who gets to decide what work is "redundant" and who pays the price for that decision.

The team at Anthropic is excited to discover what patterns emerge from all these conversations. We will analyze the insights from this study as part of our societal impacts research, publish our findings, and use this to build Claude in a way that reflects what we've learned.