Purpose
This policy establishes LiquidQube’s commitment to responsible, ethical, and transparent use of Artificial Intelligence (AI). It provides a framework for the development, deployment, and management of AI technologies, ensuring alignment with our core values, stakeholder expectations, and applicable legal and regulatory requirements.
Scope
This policy applies to all AI systems, tools, and processes developed, procured, or used by LiquidQube. It encompasses employees, contractors, vendors, and any third parties involved in the lifecycle of AI systems within our organisation.
Principles of AI Governance
Ethical AI Development
Design and develop AI systems that respect human rights, privacy, and dignity.
Avoid the creation or use of AI systems that promote discrimination, bias, or harm.
Ensure AI systems operate transparently, providing clear and understandable outputs.
Compliance with Legal and Regulatory Standards
Adhere to all relevant AI, data protection, and privacy laws, including the UK GDPR, the EU AI Act, applicable US federal and state AI regulations, and Brazil’s General Data Protection Law (LGPD).
Regularly review AI systems to ensure ongoing compliance with evolving regulatory frameworks.
Transparency and Explainability
Ensure AI models and processes are interpretable and their decisions can be explained in a manner understandable to stakeholders.
Maintain documentation for all AI systems, including their purpose, limitations, and underlying data sources.
Accountability
Define clear roles and responsibilities for the governance, oversight, and management of AI systems.
Establish a dedicated AI Ethics Committee to oversee the ethical use and deployment of AI across the organisation.
Privacy and Data Security
Prioritise the protection of personal and sensitive data used in AI systems through robust encryption, anonymisation, and secure data handling practices.
Ensure AI systems comply with LiquidQube’s Privacy and Data Security policies.
Bias and Fairness
Continuously evaluate AI systems for potential biases and implement measures to mitigate them (see the illustrative check sketched below this section).
Strive to build inclusive AI systems that cater to diverse user groups without discrimination.
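As a minimal, non-authoritative illustration of the kind of evaluation this principle calls for, the Python sketch below computes a demographic parity difference across groups of model decisions. The data, group labels, and the 0.1 review threshold are hypothetical assumptions for the example, not requirements of this policy.

    from collections import defaultdict

    def demographic_parity_difference(decisions):
        """Compare favourable-outcome rates across groups.

        `decisions` is a list of (group_label, favourable) pairs, where
        `favourable` is True if the AI system produced a favourable outcome.
        Returns the gap between the highest and lowest rates, plus the rates.
        """
        totals = defaultdict(int)
        positives = defaultdict(int)
        for group, favourable in decisions:
            totals[group] += 1
            if favourable:
                positives[group] += 1
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical usage: flag the system for review if the gap exceeds 0.1.
    gap, rates = demographic_parity_difference(
        [("group_a", True), ("group_a", False), ("group_b", True), ("group_b", True)]
    )
    if gap > 0.1:
        print(f"Potential bias detected: favourable-outcome rates {rates}")

In practice, teams would select fairness metrics appropriate to each system and use case; this sketch shows only one such measure.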
Human Oversight
Ensure human oversight in the operation of AI systems, particularly in high-risk applications or decision-making processes.
Empower employees to intervene, override, or halt AI systems when necessary to prevent unintended outcomes.
Continuous Improvement
Regularly monitor and assess AI systems for performance, security, and compliance.
Incorporate stakeholder feedback and emerging best practices to enhance AI systems and governance frameworks.
Risk Management
Identify and assess potential risks associated with AI systems during development, deployment, and operation.
Implement mitigation strategies and maintain a risk registry for ongoing monitoring and management (an illustrative registry entry is sketched below).
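This policy does not prescribe a format for the risk registry. The Python sketch below is one assumed representation of a single entry, shown only to illustrate the kind of information typically recorded; all field names and values are hypothetical.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AIRiskEntry:
        """Illustrative risk registry record for a single AI system risk."""
        system_name: str
        description: str      # e.g. "training data under-represents part-time applicants"
        likelihood: str       # e.g. "low" / "medium" / "high"
        impact: str           # e.g. "low" / "medium" / "high"
        mitigation: str       # agreed mitigation strategy
        owner: str            # accountable role or person
        review_date: date     # next scheduled review
        status: str = "open"  # open / mitigated / accepted

    # Hypothetical entry recorded for ongoing monitoring.
    entry = AIRiskEntry(
        system_name="candidate-screening-model",
        description="Possible indirect bias against part-time applicants",
        likelihood="medium",
        impact="high",
        mitigation="Quarterly fairness audit and human review of rejections",
        owner="AI Ethics Committee",
        review_date=date(2025, 6, 30),
    )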
Governance Framework
AI Ethics Committee
The AI Ethics Committee oversees the responsible development and use of AI at LiquidQube. Its responsibilities include:
Reviewing AI projects for ethical and compliance concerns.
Monitoring AI systems for fairness, bias, and transparency.
Approving high-risk AI applications before deployment.
Internal Audits
Regular audits will be conducted to assess the compliance, performance, and ethical alignment of AI systems. Audit findings will be reported to the AI Ethics Committee and senior leadership.
Employee Training
Employees involved in the design, deployment, or use of AI systems must undergo regular training on:
Ethical AI development.
Regulatory compliance and legal frameworks.
Identifying and mitigating risks in AI systems.
Incident Reporting and Response
A formal process is in place for reporting and addressing incidents involving AI systems. Reports are reviewed promptly, and corrective actions are implemented as necessary.
Metrics and KPIs
To ensure the effectiveness of AI governance, LiquidQube will track the following metrics (an illustrative calculation follows the list):
Number of AI systems reviewed and approved by the AI Ethics Committee.
Percentage of AI systems compliant with regulatory and ethical standards.
Frequency of AI-related incidents and resolution times.
User and stakeholder satisfaction scores regarding AI transparency and outcomes.
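By way of illustration only, the Python sketch below shows how two of these metrics, incident frequency and mean resolution time, might be derived from incident records. The record format is an assumption made for the example, not a mandated schema.

    from datetime import datetime

    # Hypothetical incident records: (system, reported_at, resolved_at)
    incidents = [
        ("chatbot", datetime(2025, 1, 5, 9, 0), datetime(2025, 1, 5, 17, 30)),
        ("scoring-model", datetime(2025, 2, 10, 11, 0), datetime(2025, 2, 12, 16, 0)),
    ]

    incident_count = len(incidents)
    resolution_hours = [
        (resolved - reported).total_seconds() / 3600
        for _, reported, resolved in incidents
    ]
    mean_resolution_hours = sum(resolution_hours) / len(resolution_hours)

    print(f"AI-related incidents this period: {incident_count}")
    print(f"Mean resolution time: {mean_resolution_hours:.1f} hours")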
Non-Compliance
Non-compliance with this policy may result in disciplinary action, up to and including termination of employment or termination of contracts with third parties. Violations will be reviewed by the AI Ethics Committee, and appropriate measures will be taken to prevent recurrence.
Review and Updates
This policy will be reviewed annually to align with advancements in AI technology, regulatory changes, and evolving societal expectations. Updates will be communicated to all relevant stakeholders.
Contact Information
For any questions or concerns regarding this policy, please contact the AI Governance Team at compliance@liquidqubegroup.com.