Tom Ogden

The Answer to AI Grief: AI Governance


AI has taken over so fast that in many cases not enough thought was put into what could go wrong. It's as if we were so intent on building the thing that we didn't stop to think very deeply about whether we should, and what the consequences would be if we did.


So here we are today, facing the following big questions, which were not adequately addressed at the outset:

  1. Ethics: How do we ensure artificial intelligence (AI) is used in ethical and morally responsible ways?

  2. Data Privacy and Security: How do we ensure that data is collected, stored, and used in a way that protects privacy and security?

  3. AI Bias: How do we ensure that AI systems do not perpetuate or exacerbate existing biases and inequalities in society? This includes questions around algorithmic fairness and transparency, as well as the potential for AI systems to reinforce or amplify existing social biases.

  4. AI Hallucinations: How do we keep AI systems from generating inaccurate or fabricated information?

Ethical and Moral Concerns

AI governance can help providers ensure that AI systems are designed and used in ways that align with moral and ethical values and priorities. Proper governance includes ethical guidelines and principles for AI data curation, model development and its subsequent use. It also establishes oversight and accountability mechanisms to ensure that these guidelines are followed.

Data Privacy and Security

AI governance would establish clear standards for how data is collected, stored, and used in AI systems. This would include requirements around data anonymization, encryption, and access controls, as well as guidelines for transparency and user consent.
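To make the anonymization piece concrete, here's a minimal sketch in Python of what one such standard might require in practice: replacing direct identifiers with a keyed hash (pseudonymization) before records are stored, and dropping fields that shouldn't be stored at all. The field names, the secret-key handling, and the choice of which fields to drop are all hypothetical illustrations, not part of any specific standard.

```python
import hmac
import hashlib

# Assumption: in a real system this key would come from a secrets manager,
# never from source code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def anonymize_record(record: dict) -> dict:
    """Pseudonymize identifying fields; drop fields we have no basis to keep."""
    identifying = {"name", "email"}   # hashed: joins still work, raw values don't leak
    dropped = {"ssn"}                 # hypothetical example of a never-store field
    out = {}
    for field, value in record.items():
        if field in dropped:
            continue
        out[field] = pseudonymize(value) if field in identifying else value
    return out

record = {"name": "Ada", "email": "ada@example.com", "ssn": "000-00-0000", "age": 36}
clean = anonymize_record(record)
```

Because the hash is keyed and deterministic, the same person maps to the same pseudonym across records, so datasets can still be linked for analysis without exposing the underlying identity.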

AI Bias

Governance can help prevent AI bias by establishing guidelines, standards, and oversight mechanisms that promote fairness, transparency, and accountability in AI development and deployment. Here are some specific ways in which governance can help prevent AI bias:

  1. Promote diversity and inclusion: Build AI development teams that are diverse and inclusive; such teams are better positioned to curate representative data and to identify and mitigate biases in AI systems.

  2. Require fairness assessments: Beyond requiring data curated from all sides of an issue, governance frameworks can require that AI systems undergo fairness assessments to identify and mitigate potential biases. These assessments can include techniques such as algorithmic auditing, bias testing, and impact assessments.

  3. Establish oversight mechanisms: Establish oversight mechanisms, such as review boards or ethics committees, to provide independent and external scrutiny of AI systems and identify potential biases.

  4. Require transparency and explainability: Require that AI systems be transparent and explainable, providing references so that users can understand how the system works and identify potential biases.
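The bias testing mentioned in item 2 above can start very simply: compare a model's positive-outcome rate across groups. Here's a toy sketch in Python with made-up decision data; the "four-fifths" 0.8 threshold is one commonly cited rule of thumb for disparate impact, used here purely as an illustration.

```python
from collections import defaultdict

def demographic_parity(decisions):
    """Compute the positive-outcome rate per group from (group, approved) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths(rates, threshold=0.8):
    """Flag disparate impact: the worst group's rate must be at least
    `threshold` times the best group's rate."""
    return min(rates.values()) >= threshold * max(rates.values())

# Hypothetical audit log of (group, decision) pairs
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

rates = demographic_parity(decisions)   # group A: 0.75, group B: 0.25
flagged = not passes_four_fifths(rates)  # True -> send to review board
```

A real fairness assessment goes far beyond a single metric, but even this level of routine measurement gives an oversight body something concrete to review.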

AI Hallucinations

Hallucinations are often associated with bias, and all of the processes used to prevent bias will likewise help prevent AI hallucinations (instances where AI generates inaccurate or fabricated information). But prevention has to start at the beginning, with curating, cleaning, and preparing data.

  1. Data Quality and Cleansing Protocols: Establish strict protocols for data cleaning, removing inaccuracies, duplicates, and noise that could confuse the model. Ensuring high-quality data means the model learns from accurate information, reducing the chances of generating inaccurate outputs.

  2. Data Diversity and Representativeness Checks: A governance process can ensure that data reflects the diversity needed to cover a model’s intended applications. When training data is representative, the model is less likely to make incorrect generalizations or produce "hallucinated" outputs that stem from narrow or skewed data.

  3. Data Labeling and Annotation Standards: Clear standards for data labeling and annotation ensure that data is organized and categorized correctly, which improves model understanding. Well-labeled data allows the model to establish correct associations, minimizing errors and the chances of hallucinations.

  4. Source Verification and Credibility Checks: Establishing governance requirements for source verification can help ensure that the model is trained on credible, trustworthy data sources, preventing it from picking up misinformation. For example, models in healthcare could be required to use data from validated clinical studies, reducing the risk of hallucinated medical information.

  5. Feedback Loops and Real-Time Monitoring: Encourage the integration of feedback mechanisms that allow users to report AI hallucinations. Real-time monitoring systems, especially in critical applications like healthcare, could help flag and prevent the dissemination of potentially dangerous inaccuracies.

  6. Incentivize Post-Deployment Testing: Post-deployment testing can identify and correct hallucinations as models encounter real-world data. Governance can support such testing through funding incentives, regulatory requirements, and research grants for ongoing model assessment.

  7. Ethical and Accountability Frameworks: Governance structures that hold developers accountable for harm caused by AI hallucinations could encourage safer practices. Ethical frameworks that prioritize user safety, accuracy, and transparency can be enforced through penalties, which encourage organizations to proactively minimize risks.
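The data-quality step in item 1 above can be sketched in a few lines. This is a toy illustration in Python with a hypothetical record shape, not a production pipeline: it drops exact duplicates and records with missing or empty required fields, two of the simplest kinds of "noise" a cleansing protocol would target.

```python
def clean_dataset(records, required=("text", "label")):
    """Remove duplicates and records missing required fields or values."""
    seen = set()
    cleaned = []
    for rec in records:
        # Drop records with missing or empty required fields (noise)
        if any(rec.get(field) in (None, "") for field in required):
            continue
        # Drop exact duplicates, keyed on the required fields
        key = tuple(rec[field] for field in required)
        if key in seen:
            continue
        seen.add(key)
        cleaned.append(rec)
    return cleaned

raw = [
    {"text": "cats are mammals", "label": "fact"},
    {"text": "cats are mammals", "label": "fact"},   # exact duplicate
    {"text": "", "label": "fact"},                   # empty text -> noise
    {"text": "the moon is cheese", "label": None},   # unlabeled record
]
result = clean_dataset(raw)  # only the first record survives
```

Real cleansing pipelines also handle near-duplicates, outliers, and format inconsistencies, but the principle is the same: the model should only ever see data that has passed explicit, documented checks.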

There you have it! In case you're wondering if an AI fed me all the above information, the answer is, sadly, "no." I wish they could have spit out all this information, because it's a subject they should ideally all be well versed in — am I right? I used multiple AIs to research the topic and to help me organize the content, but I determined what should go in there myself. I learned that modern generative AIs don't do all the above, nor are they generally aware of it without a lot of coaxing.

It's my hope that someday soon, AI companies will get responsible about governance and ensure all their services are ethical, secure, and accurate. Providers can keep slapping on after-market fixes and bolt-on checks, but to do it right, governance needs to cover the entire process, from data gathering and model design through deployment. Until they wise up and do it right, it's up to us, the consumers, to demand higher standards.

~Tom
