AI Will Never Give You Exactly What You Want
You have an idea for an app. Maybe something to manage your family World Cup bracket, or a wizard to help you plan for your eventual death. You ask AI to help, and suddenly you're swimming in code that is not just good but great. Like, a live app, with accordions and persistent memory and a polished, if bland, user interface.
With a few hours of prompting, you're down to tweaking type styles and border radii, well on course to realize your vision.
But you can't quite get there. Your AI helper has made so many delightful design decisions and produced so many artifacts throughout this process (those Edward Gorey illustrations!). It is so capable, so collaborative, so complimentary about what you're doing. If it could do that, surely it could do this?
If you can just fix this one thing, we can stay together.
There's no blame to apportion. The AI agent, as designed, is doing its very best.
At some point, it comes down to this: can you accept the flaws and still be happy? Or do you refuse to settle? What exactly is good enough?