The human layer...
Human-centred AI, or the ‘human layer’, asks some pretty fundamental questions about whatever AI use case is being ideated: is this going to be good for people? What will the consequences be? What are the potential risks?
The human layer is really about thinking in broader terms than an engineer typically might. It’s about asking: should we even be doing this? And, if we find there could be some risk of harm, what can we do to mitigate it?
That doesn’t make every project an automatic no-go. You can find risks in almost any product if it’s misused or fed the wrong data. But it’s your obligation to mitigate those risks, which means looking at the stakeholders as well as the people affected by the AI in question.
For instance, if your AI product decides who should be approved for a credit card, the bank is the stakeholder, but the end client is the one affected by the decision. So you have to consider every human facet of your project and be respectful towards all of them.
This human element stretches far beyond the initial stages of an AI project, however. It also means adding a layer of human oversight that can supervise the system, spot errors, correct them, and take whatever additional steps are needed to keep things on course. You need an effective mechanism in place in case things start going south.
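To make that concrete, here’s a minimal sketch in Python of what such an oversight mechanism could look like for the credit card example above. Everything here is hypothetical (the `Application` class, the scoring function, the thresholds); the point is simply that decisions the model isn’t confident about get routed to a person rather than automated away.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical illustration of a human-in-the-loop decision gate.
# None of these names come from a real library.

@dataclass
class Application:
    applicant_id: str
    features: Dict[str, float]

def decide_with_oversight(
    app: Application,
    score: Callable[[Application], float],  # model's approval probability
    approve_above: float = 0.9,
    decline_below: float = 0.2,
) -> str:
    """Act automatically only when the model is confident;
    escalate everything in between to a human reviewer."""
    p = score(app)
    if p >= approve_above:
        return "approved"
    if p <= decline_below:
        return "declined"
    # Uncertain cases go to a person who can supervise the system,
    # correct individual decisions, and spot systematic errors
    # before they affect more applicants.
    return "escalated_to_human_review"

# Example usage with a stand-in scoring function.
if __name__ == "__main__":
    toy_score = lambda app: 0.55  # placeholder model output
    print(decide_with_oversight(Application("A-123", {}), toy_score))
    # -> escalated_to_human_review
```

The exact thresholds are a design choice the humans behind the project have to own; the structure just guarantees there’s a point where a person can step in.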