How can we add some productive humility to these interfaces? How can we make systems that are smart enough to know when they’re not smart enough?
I’m not sure that I have answers just yet, but I believe I have some useful questions. In my work, I’ve been asking myself these questions as I craft and evaluate interfaces for bots and recommendation systems.
When should we sacrifice speed for accuracy?
How might we convey uncertainty or ambiguity?
How might we identify hostile information zones?
How might we provide the answer’s context?
How might we adapt to speech and other low-resolution interfaces?
Josh Clark, “Systems Smart Enough To Know They’re Not Smart Enough”
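Questions like these can translate directly into interface logic. As one minimal sketch (not from Clark's article; the function name, structure, and threshold below are all illustrative assumptions), a bot might convey uncertainty or abstain outright when its best answer's confidence falls below a threshold, rather than presenting a shaky guess with false authority:

```python
# Hypothetical sketch of a "productively humble" response policy.
# None of these names come from the quoted article.

def respond(candidates, threshold=0.75):
    """Pick the best candidate answer, hedging when confidence is low.

    candidates: list of (answer, confidence) pairs, confidence in [0, 1].
    """
    if not candidates:
        # No answer at all: admit it instead of fabricating one.
        return "I don't know yet. Could you rephrase?"
    answer, confidence = max(candidates, key=lambda c: c[1])
    if confidence < threshold:
        # Convey uncertainty instead of stating a weak answer as fact.
        return f"I'm not sure, but my best guess is: {answer}"
    return answer

print(respond([("Paris", 0.98), ("Lyon", 0.02)]))   # confident: plain answer
print(respond([("Paris", 0.55), ("Lyon", 0.45)]))   # uncertain: hedged answer
```

The interesting design choice is the threshold itself: it is exactly the speed-versus-accuracy trade-off the first question raises, since a higher threshold produces more hedged or deferred answers in exchange for fewer confidently wrong ones.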