Updated AIEI AI Principles: Strengthening Responsible AI Practices
The updated AIEI AI Principles reflect the Association’s continued effort to ensure that its framework for responsible artificial intelligence remains practical, relevant and aligned with evolving international approaches.
The revised Principles provide organizations with a structured and consistent approach to the responsible use, development, and implementation of AI systems. They are designed to support organizations across the entire AI lifecycle, from design and procurement to deployment, monitoring and retirement, while maintaining an appropriate balance between innovation, human rights and ethical responsibility.
Why Do These Principles Matter?
AI integration into business processes is no longer a question of the future; it is today's reality. With it comes growing responsibility: for data quality, algorithmic transparency and the real-world consequences of automated decisions on people's lives.
The AIEI Principles provide a voluntary framework that helps organizations clearly articulate their approach to AI, from design and procurement through to monitoring and retirement. They do not replace applicable law, but offer a practical reference point for responsible AI use across the full system lifecycle.
Core Principles for Responsible AI
The framework covers the entire AI lifecycle and is structured around ten interconnected principles:
- Respect for Human Rights and Ethics
AI systems should respect fundamental rights, freedoms and human dignity, ensuring compliance with legal and ethical standards. Organizations should assess potential impacts on individuals, especially in sensitive contexts.
- Human Oversight
AI should support, not replace, human decision-making. Meaningful human control must be ensured, particularly in high-impact or high-risk scenarios.
- Fairness and Non-Discrimination
AI systems should avoid bias and ensure equitable outcomes. Organizations should actively identify and mitigate risks of discrimination in data and algorithms.
- Privacy and Data Protection
Personal data is collected only where there is a valid legal basis and a legitimate purpose. Individuals' data rights must be respected throughout the AI lifecycle.
- Transparency (Explainability)
AI use should be clearly disclosed, and key decisions should be explainable in understandable terms. Stakeholders should be informed about how AI systems operate.
- Safety, Security, Robustness and Reliability
AI systems must function reliably and securely, with safeguards against failures, misuse, or attacks. Continuous monitoring and improvement are required.
- Accountability and Governance
Clear lines of responsibility must be established, including for third-party AI tools obtained from vendors, contractors, or external platforms.
- Societal Benefit and Responsibility
AI should be used in ways that benefit society and avoid harm. Organizations should consider broader societal impacts beyond immediate business value.
- Environmental Sustainability (Green AI)
AI development should take environmental impact into account, promoting energy efficiency and sustainable practices where possible.
- Awareness, Education and Workforce Adaptation
Organizations should promote AI literacy and support workforce adaptation. Employees must understand how to use AI responsibly and effectively.
Practical Application for Organizations
Organizations are encouraged to:
- declare their support for the Principles
- integrate them into internal policies and procedures
- apply them proportionately, based on the context, purpose and risk level of each AI use case
This flexible approach allows organizations to adapt the Principles to their specific operational environments while maintaining alignment with internationally recognized standards.
AIEI Declarative AI Certificate
Members of the Association who confirm their support for the Principles may obtain the AIEI Declarative AI Certificate (AI Verified badge).
The certificate:
- reflects an organization’s commitment to responsible AI practices
- supports transparency and credibility in the market
- serves as a visible signal of alignment with ethical and governance standards
The initiative is voluntary and does not impose regulatory obligations. Instead, it is intended to encourage responsible practices and continuous improvement.
The updated version of the Principles was developed with contributions from more than 12 experts from 9 countries, led by the AIEI Legal Committee, including Amisha Mittal, Kateryna Kernoz, Hysmir Idrizi, Jae-Seong Lee and Taras Lytovchenko, with thanks to all contributors involved.
We welcome feedback and suggestions as part of our continuing work to strengthen these Principles.
The full version of the AIEI AI Principles is available here: https://ai-ei.org/ai-principles/
AI Horizon Conference
The AI Horizon Conference brought together entrepreneurs, investors and industry leaders in Lisbon to discuss key trends and shape the future of AI.