Hacker News
The AI skeptic's guide to AI collaboration (hils.substack.com)
23 points by swyx 4 hours ago | 21 comments





> It's like having a thoughtful and impossibly fast colleague who's always available to help me develop and sharpen my ideas.

More like an absolute bumbling idiot of a colleague who needs things explained over and over and can’t ever be trusted to get anything right.


> More like an absolute bumbling idiot

Sam Altman said AI would "clone his brain" by 2026. He is wrong: it already has.

I've listened to him speak many times, and that's an accurate description. Seriously, has he ever said even one interesting thing?


Yes, and I already got those.

Now, running with scissors.

I don’t get the “just spend more time with AI” argument. It’s not a skill; stop trying to make it one. Why should I spend 30 days with it? The only thing that would accomplish is taking the soul and joy out of everything. Everyone just sounds like they don’t like coding.

Of course using AI is a skill, just like e.g. writing effective search queries used to be a skill back in the day. When I first actually tried to get something done with AI models, rather than just kicking the tires with the implicit motivation of showing how useless they were, it took way more iterations to get a satisfactory output at the start than it did a week later.

The kinds of things you'll learn are:

- What's even worth asking for? What categories of requests just won't work, what scope is too large, what kinds of things are going to just be easier to do yourself?

- Just how do you phrase the request, what kind of constraints should you give up front, what kind of things do you need to tell it that should be self-evident but aren't?

- How do you deal with sub-optimal output? When do you fix it yourself, when do you get the AI to iterate on it, and when do you just throw out the entire session and start afresh?

The only way for it not to be a skill would be if how you use an AI did not matter for the quality of the output, or if getting better results were just a natural talent that some people have and some don't. Both of those seem like pretty unrealistic ideas.

I think there's probably a discussion to be had about how deep or transferable the skill is, but your opening gambit of "it's not a skill, stop trying to make it one" is not a productive starting point for that discussion.
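
To make that concrete, here's a minimal sketch of the "constraints up front, then iterate with specific corrections" loop, using the OpenAI Python SDK. The model name and prompts are placeholders, not a recommendation:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    messages = [
        # Constraints stated before the task, not after the first bad answer:
        {"role": "system", "content": "Reply with Python 3 code only: one function, "
                                      "standard library only, with type hints."},
        {"role": "user", "content": "Merge overlapping integer intervals."},
    ]
    first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    draft = first.choices[0].message.content

    # Iterate with a specific correction instead of "that's wrong, try again";
    # if the session keeps drifting, throw it away and start fresh.
    messages += [
        {"role": "assistant", "content": draft},
        {"role": "user", "content": "The input may be unsorted. Handle that and add a doctest."},
    ]
    second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(second.choices[0].message.content)

The point isn't the specific prompts; it's that the follow-up names the defect instead of just saying "try again".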


> It’s not a skill; stop trying to make it one.

Using it efficiently is absolutely a skill. Just like google-fu is a skill. Or reading fast / skimming is a skill. Or like working with others is a skill. And so on and so on.


I agree, it is absolutely not a skill. LLMs are a black box, the models keep changing under you, and their output can change even if you try the exact same input more than once.

People claiming it's a skill should read up on experiments on behavioral adaptation to stochastic rewards: subjects develop elaborate "rain dances" in the belief that they can influence the outcome. Not unlike sports fans' superstitions.
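
To be concrete about the stochastic part: most of the run-to-run variation is sampling temperature, and even the vendor-provided reproducibility controls are explicitly best-effort. A sketch with the OpenAI Python SDK; the model name is a placeholder:

    from openai import OpenAI

    client = OpenAI()
    prompt = [{"role": "user", "content": "Name one use for a paperclip."}]

    # Same prompt, two calls: with default settings you may get different
    # outputs. temperature=0 plus a fixed seed makes results *more*
    # repeatable, but the API only promises best-effort determinism.
    for _ in range(2):
        r = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=prompt,
            temperature=0,
            seed=1234,
        )
        print(r.choices[0].message.content)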


It's a skill. The more time (and intentional practice) you invest in it the better you'll get at it.

So strange, I haven't had this much fun at coding in a long time. It's amazing.

Why is it strange? Different people enjoy different things. Seems normal to me

It's surprising to me that these things are so hard to use well. If you had asked me before ChatGPT to guess what the user experience with this kind of technology would be like, I would have said I expected it to be as intuitive as talking, with almost no friction. I think this is a natural expectation that, when violated, turns a lot of people off.

> I would have said I expected it to be as intuitive as talking, with almost no friction

There is so much friction when you try to do anything technical by talking to someone who doesn't know you; you have to know each other extremely well for there to be no friction.

This is why people prefer communicating in pseudocode rather than natural language when discussing programming; it's really hard to describe what you want in words.


> The problem is that most people misunderstand what AI is good at. They talk about it "taking over" writing, planning, and problem-solving—as if these were simple, mechanical tasks that could be fully automated without any loss in quality.

Because that’s the claim of all the AI companies. Right next to the claim that AGI is within reach.

The question is whether, if everyone uses AI, all text will become too similar.


Talking with some younger colleagues over drinks the other evening, I was shown their Instagram feed. It's all AI slop. Machine-generated jokes.

For all the talk about jobs and art, LLMs seem to love shitposting.


> illustrative 30-day calendar of exercises

That there is such a calendar for using ChatGPT, in the style of "how to eat healthy", "how to stay fit", or "how to be more confident", shows me more than anything else what impact AI is having on our society.


It shows the desperation of the bubble: absurd miracles must be sold to keep the "soonstartrek" delusion going, the customers be damned.

> The people who are most skeptical of AI are often those with the highest standards for quality.

From Denial to Anger to Bargaining. And we are starting out with flattery. Masterful gambit!

Instead of participating in slop coding (sorry, "AI collaboration"), I think I'll just wait for the author and their ilk to make their way across Depression and Acceptance.


The problem is that current AI companies are ignoring domain expertise in favor of overly generalist models. "Meh, we have AGI planned for tomorrow anyway; it will sort everything out by itself. Somehow." This is understandable (see the "Bitter Lesson"), but particular knowledge domains are so deep that you can't just ignore them: stay oblivious and you'll produce a metric ton of crap. No matter how advanced your model is, without consulting actual experts on the fundamentals it will always miss the mark and look off.

Anthropic used to do this for Claude's character up until Claude 3, but then dropped it. OAI's image generation is consistently ahead in prompt understanding and abstraction, but they famously don't give a flying turd about the nuances. Current models are produced by ML nerds who handwave the complexity away, not by experts in what they're trying to solve. If they want it to be usable now, they need to listen to people like this [1]. But I don't think they really care.

[1] https://yosefk.com/blog/the-state-of-ai-for-hand-drawn-anima...


> The only thing that I have seen convince people (and it always does)

...when anyone starts talking in universals like this, they're usually deep in some hype cycle.

This is a problematic approach that many people take; they posit that:

1) AI is fundamentally transformative.

2) People who don't acknowledge that simply haven't tried it.

However, I posit that:

3) People who think that haven't actually used it in a serious capacity, or are deliberately misrepresenting things.

The problem is that:

> In reality, I go back and forth with AI constantly—sometimes dozens of times on a single piece of work. I refine, iterate, and improve each part through ongoing dialogue. It's like having a thoughtful and impossibly fast colleague who's always available to help me develop and sharpen my ideas.

...is only true for trivial problems.

The author calls this out, saying:

> It won't excel at consistently citing specific papers, building codes, or case law correctly. (Advanced techniques exist for these tasks, but they're not worth learning when you're just starting out. For now, consider them out of scope.)

...but this is really the heart of everything.

What are those advanced techniques? Seriously, after 30 days of using AI, if all you're doing is:

> Prepare for challenging conversations by using ChatGPT to simulate potential scenarios, helping you approach interpersonal dynamics with empathy and grace.

Then what the absolute heck are you doing?

Stop gaslighting everyone.

Those 'advanced techniques' are all anyone cares about, because they are the things that are hard, and don't work.

In reality, it doesn't matter how much time you spend learning; the technology is fundamentally limited. It can't do some things.

Spending time learning how to do trivial things will never enable you to do hard things.

It's not missing the 'human touch'.

It's the crazy hallucinations, invalid logic, failure to do as told, flat-out incorrect information or citations, and the inability to perform a task (e.g. as an agent) without messing something else up.

There are a few techniques that can help you have an effective workflow; but seriously, if you're a skeptic about AI, spending a month doing trivial stuff like asking for '10 ideas about X' is an insult to your intelligence and doesn't address any of the concerns that, I would argue, skeptics and real people actually have about AI.
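
For what it's worth, the 'advanced techniques' usually meant here boil down to retrieval-augmented generation: look the sources up yourself and force the model to answer only from them. A minimal sketch below, with a toy search_corpus and fake passages standing in for a real index; whether this actually fixes citations at scale is exactly what's in dispute.

    from openai import OpenAI

    client = OpenAI()

    # Fake placeholder passages; a real system would index actual documents.
    CORPUS = [
        "Paper A (2021): transformers do X under condition Y.",
        "Building code 4.2: stairways require a minimum width of Z.",
    ]

    def search_corpus(query: str, top_k: int = 3) -> list[str]:
        # Toy stand-in for a real index (BM25, embeddings, ...): keyword overlap.
        words = query.lower().split()
        scored = sorted(CORPUS, key=lambda p: -sum(w in p.lower() for w in words))
        return scored[:top_k]

    def answer_with_citations(question: str) -> str:
        context = "\n\n".join(
            f"[{i}] {p}" for i, p in enumerate(search_corpus(question), 1)
        )
        messages = [
            {"role": "system", "content":
                "Answer using ONLY the numbered passages below and cite their "
                "numbers. If they don't contain the answer, say so.\n\n" + context},
            {"role": "user", "content": question},
        ]
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        return reply.choices[0].message.content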


Let’s take vim and emacs, or bash. People do not spend years on them only for pleasure or fun; it’s because they’re trying to eliminate tedious aspects of their previous workflows.

That’s the function of a tool: to help you do something in a more relaxed manner. Learning to use it can take some time, but the acquired proficiency will compensate for that.

General-public LLMs have been around for two years, and still today there are no concrete use cases that meet that definition of a tool. It’s “trust me, bro!” and warnings in small print.



