1. AI trust, risk, and security management (AI TRiSM)
As AI technology advances rapidly, it will be critical to govern and secure applications with robust but flexible AI regulation. There are undeniable risks that come with AI, such as poisoned or biased training data, hallucination as a side effect of increased creativity, or large language models offering plausible-sounding statements instead of facts.
Here are some real-life examples:
- Biased hiring tool: An AI-driven recruitment tool was trained on resumes collected over a span of ten years, sourced primarily from male applicants. As a result, the system developed a gender bias, disproportionately favouring male candidates over female ones (a simple check for this kind of skew is sketched after this list).
- Hallucinated legal cases: A U.S. attorney used generative AI to identify precedent cases for a current legal matter and obtained seemingly promising results. However, at least six cases cited in the resulting legal brief turned out not to exist. On reviewing the submitted documents, the court noted that the cited cases contained false names and file numbers as well as fabricated internal citations and quotations. After further hearings, the lawyer received a heavy fine.
- Incorrect medical diagnosis: When responding to medical questions, a generative AI model provided answers that appeared plausible on the surface but were fundamentally inaccurate, stemming from reliance on flawed or misinterpreted sources. Such misinterpretations pose potential health risks and legal complications, particularly in sensitive areas such as diagnosis and treatment.
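To make the hiring example more concrete, here is a minimal sketch of how such a skew can be surfaced before deployment, using the "four-fifths rule": the selection rate of the least-favoured group should be at least 80% of that of the most-favoured group. The data layout, group labels, and threshold below are illustrative assumptions, not part of any specific tool.

```python
# Minimal sketch of a pre-deployment bias check on a hiring model,
# based on the four-fifths (80%) rule applied to per-group selection rates.
# Data layout and group labels are illustrative assumptions.

def selection_rates(predictions, groups):
    """Fraction of positive ('hire') predictions per group."""
    rates = {}
    for group in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def passes_four_fifths_rule(predictions, groups, threshold=0.8):
    """Flag disparate impact: the lowest selection rate must reach at
    least `threshold` times the highest selection rate."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values()) >= threshold

# Toy example: 1 = recommended for hire, 0 = rejected
predictions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups      = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

print(selection_rates(predictions, groups))          # m: 0.8, f: 0.2
print(passes_four_fifths_rule(predictions, groups))  # False (0.2 / 0.8 = 0.25)
```

A check like this is deliberately simple; in practice it would be one of several fairness tests run on a model's outputs before and after deployment.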
Democratised access to AI in general, as well as its use by key stakeholders in high-stakes areas such as military scenarios, healthcare, or publishing, amplifies the urgency of TRiSM controls. These controls are essential for ensuring the trustworthiness, fairness, transparency, and reliability of AI technologies. They also play a crucial role in protecting privacy and managing the broader societal impacts of AI.
We need to strike a balance between innovation and responsible use, promoting the positive contributions of AI while mitigating potential risks and harms. Without guardrails, AI models can spiral into misinformation or lead to unintended harmful consequences. TRiSM encompasses AI model operationalisation, proactive data protection, and risk controls. According to Gartner, by 2026 organisations that apply AI trust, risk, and security management controls to their AI applications will consume at least 50% more accurate information, reducing flawed decision making.
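To illustrate what such a risk control can look like in code, the sketch below wraps a generative model call with a citation check in the spirit of the legal example above: any cited case that cannot be verified against an authoritative database blocks the output. Both model_generate and case_exists are hypothetical placeholders for a real model client and a real legal research API, and the citation pattern is deliberately naive.

```python
# Minimal sketch of a TRiSM-style output guardrail for generated legal text.
# `model_generate` and `case_exists` are hypothetical placeholders, not real APIs.

import re

# Naive pattern for case citations such as "Smith v. Jones" (illustrative only).
CITATION_PATTERN = re.compile(r"[A-Z][A-Za-z]+ v\. [A-Z][A-Za-z]+")

def model_generate(prompt: str) -> str:
    """Placeholder for a call to a generative AI model."""
    raise NotImplementedError("Plug in your model client here.")

def case_exists(citation: str) -> bool:
    """Placeholder for a lookup in an authoritative case-law database."""
    raise NotImplementedError("Plug in your legal research service here.")

def generate_with_citation_check(prompt: str) -> str:
    """Return the model's draft only if every cited case can be verified."""
    draft = model_generate(prompt)
    citations = CITATION_PATTERN.findall(draft)
    unverified = [c for c in citations if not case_exists(c)]
    if unverified:
        raise ValueError(f"Unverified citations, do not file: {unverified}")
    return draft
```

The point is not the regex but the pattern: generated content passes through an automated verification step before a human relies on it.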
TRiSM enhances bias control, promotes fairness, and enables organisations to remain competitive by ensuring transparent AI management. Gartner anticipates a 50% improvement in achieving intended outcomes for models governed by AI trust, risk, and security management. And keep in mind: AI risk management practices should mitigate not only internal AI risks, but also external ones that you cannot directly control. Third-party AI models, e.g. search and chat services, carry risks such as misinformation and errors feeding into decision making, unintended consequences of widespread AI adoption, and vulnerabilities in widely used AI frameworks.
Third-party AI services subjected to TRiSM may yield more accurate information, though perhaps also less content overall; without TRiSM, however, the risk of misinformation grows. By taking a holistic approach to AI risk management that covers both internal and external considerations, organisations can better safeguard their AI systems and contribute to the overall resilience of the AI ecosystem. Embracing AI trust, risk, and security management is not only about risk mitigation; it is also about unlocking AI's full potential for ethical decision making in our digital era.
Check out our guide to responsible AI to learn more about adopting safe, ethical, and sustainable practices around AI – and why this is a moral, economic, and regulatory imperative.