We've all seen it: ChatGPT genuinely solving coding puzzles. Clearly, that's a long way from building MVP products, designing new programming languages or writing "Hello World" in Haskell. But it's also a long way from even GPT-3, never mind the status quo 10 years ago. It would be interesting to discuss what a future looks like where "human operators" of programming compete against a machine. I don't think that future is imminent, but equally I think it's less distant than I did a week ago.
Some threads that come to mind:
- Are these language models better than current offshore outsourced coders? The latter can code too, sort of, and yet they don't threaten the software industry (much).
- What would SEs do if any layperson could say "hey AI slave, write me a program that..."? What would we, literally, do? Are there other, undersaturated professions we'd go into, where analytical thinking is required? Could we, ironically, wake up in a future where thinking skills are taken over by machines, and it's other skills - visual, physical labour, fine motor skills - that remain unautomated?
- Are we even the first ones in the firing line? Clearly, for now AI progress is mostly in text-based professions; we haven't seen a GPT equivalent for video comprehension, for example. Are lawyers at risk? Writers?
- What can SEs do, realistically, to protect themselves? Putting the genie back in the bottle is not, as discussed many times in other threads, an option.
- Or is the whole worry bogus, with good justification, and we're fine?
No doubt ChatGPT will chip in...
Even the test cases that it generates can be deceptive: they look convincing, but on closer inspection they sometimes aren't really testing anything.
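To illustrate (this is a constructed example, not an actual ChatGPT transcript), a generated test suite can look thorough while every assertion is effectively vacuous - the implementation could be badly broken and the tests would still pass:

```python
import unittest

def median(values):
    """Return the median of a list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

class TestMedian(unittest.TestCase):
    # Three tests, decent-looking names - but none pins down a value.
    def test_returns_a_number(self):
        # Any numeric bug (wrong index, wrong average) still passes.
        self.assertIsInstance(median([3, 1, 2]), (int, float))

    def test_handles_even_length(self):
        # Asserts nothing about what the result actually is.
        self.assertIsNotNone(median([4, 1, 3, 2]))

    def test_sorted_input(self):
        # Compares the function to itself - true by construction.
        self.assertEqual(median(sorted([1, 2, 3])), median([1, 2, 3]))
```

A single honest assertion like `assertEqual(median([4, 1, 3, 2]), 2.5)` would do more than all three of these combined.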
But in the end, after hours and hours of trying to coax the AI, it was unable to do what I wanted: build a B-tree in Python. It built a binary tree just fine, but getting it to generalize to a B-tree was a problem. Overall I couldn't recommend this to anyone without a strong CS background. It introduces far too many subtle bugs that are almost impossible to review, because the code it produces is so convincing that you go "hmm, maybe it knows what it is talking about", but in the end you have no idea what you should trust.
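For context on why "just generalize the binary tree" isn't a one-line change, here's a minimal sketch of what was being asked for - a CLRS-style B-tree insert with minimum degree `t` (class and method names are mine, and deletion is omitted entirely). Nodes hold multiple sorted keys and must split around their median when full, which is exactly the machinery a binary tree lacks:

```python
class BTreeNode:
    def __init__(self, leaf=True):
        self.keys = []      # up to 2*t - 1 sorted keys
        self.children = []  # up to 2*t children (internal nodes only)
        self.leaf = leaf

class BTree:
    def __init__(self, t=2):
        self.t = t                    # minimum degree; t=2 is a 2-3-4 tree
        self.root = BTreeNode(leaf=True)

    def insert(self, key):
        root = self.root
        if len(root.keys) == 2 * self.t - 1:   # root full: tree grows upward
            new_root = BTreeNode(leaf=False)
            new_root.children.append(root)
            self._split_child(new_root, 0)
            self.root = new_root
        self._insert_nonfull(self.root, key)

    def _split_child(self, parent, i):
        # Split parent's full child i around its median key.
        t = self.t
        child = parent.children[i]
        sibling = BTreeNode(leaf=child.leaf)
        median = child.keys[t - 1]
        sibling.keys = child.keys[t:]
        child.keys = child.keys[:t - 1]
        if not child.leaf:
            sibling.children = child.children[t:]
            child.children = child.children[:t]
        parent.keys.insert(i, median)
        parent.children.insert(i + 1, sibling)

    def _insert_nonfull(self, node, key):
        if node.leaf:
            node.keys.append(key)
            node.keys.sort()          # simple, not the optimal shift-insert
            return
        i = 0
        while i < len(node.keys) and key > node.keys[i]:
            i += 1
        if len(node.children[i].keys) == 2 * self.t - 1:
            self._split_child(node, i)
            if key > node.keys[i]:
                i += 1
        self._insert_nonfull(node.children[i], key)

    def search(self, key, node=None):
        if node is None:
            node = self.root
        i = 0
        while i < len(node.keys) and key > node.keys[i]:
            i += 1
        if i < len(node.keys) and node.keys[i] == key:
            return True
        return False if node.leaf else self.search(key, node.children[i])
```

Even in this stripped-down form, the split bookkeeping (median promotion, key and child partitioning) is exactly the kind of code where a subtle off-by-one survives a casual review.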