Bern | Lisbon | New York
info@ai-ei.org
+351 93 832 8533
Become a Member

Updated AIEI AI Principles: Strengthening Responsible AI Practices


The updated AIEI AI Principles reflect the Association’s continued effort to ensure that its framework for responsible artificial intelligence remains practical, relevant and aligned with evolving international approaches.

The revised Principles provide organizations with a structured and consistent approach to the responsible use, development, and implementation of AI systems. They are designed to support organizations across the entire AI lifecycle, from design and procurement to deployment, monitoring and retirement, while maintaining an appropriate balance between innovation, human rights and ethical responsibility.

Why Do These Principles Matter?

AI integration into business processes is no longer a question of the future; it is today's reality. With that comes growing responsibility: for data quality, algorithmic transparency and the real-world consequences of automated decisions on people's lives.

The AIEI Principles provide a voluntary framework that helps organizations clearly articulate their approach to AI, from design and procurement through to monitoring and retirement. They do not replace applicable law, but they offer a practical reference point for responsible AI use across the full system lifecycle, maintaining a balance between innovation, human rights and ethical responsibility.

Core Principles for Responsible AI

The framework covers the entire AI lifecycle and is structured around ten interconnected principles:

  1. Respect for Human Rights and Ethics
    AI systems should respect fundamental rights, freedoms and human dignity, ensuring compliance with legal and ethical standards. Organizations should assess potential impacts on individuals, especially in sensitive contexts.
  2. Human Oversight
    AI should support, not replace, human decision-making. Meaningful human control must be ensured, particularly in high-impact or high-risk scenarios.
  3. Fairness and Non-Discrimination
    AI systems should avoid bias and ensure equitable outcomes. Organizations should actively identify and mitigate risks of discrimination in data and algorithms.
  4. Privacy and Data Protection
    Personal data is collected only where there is a valid legal basis and a legitimate purpose. Individuals’ data rights must be respected throughout the AI lifecycle.
  5. Transparency (Explainability)
    AI use should be clearly disclosed, and key decisions should be explainable in understandable terms. Stakeholders should be informed about how AI systems operate.
  6. Safety, Security, Robustness and Reliability
    AI systems must function reliably and securely, with safeguards against failures, misuse, or attacks. Continuous monitoring and improvement are required.
  7. Accountability and Governance
    Clear lines of responsibility must be established, including for third-party AI tools obtained from vendors, contractors, or external platforms.
  8. Societal Benefit and Responsibility
    AI should be used in ways that benefit society and avoid harm. Organizations should consider broader societal impacts beyond immediate business value.
  9. Environmental Sustainability (Green AI)
    AI development should take into account environmental impact, promoting energy efficiency and sustainable practices where possible.
  10. Awareness, Education and Workforce Adaptation
    Organizations should promote AI literacy and support workforce adaptation. Employees must understand how to use AI responsibly and effectively.

Practical Application for Organizations

Organizations are encouraged to:

  • declare their support for the Principles
  • integrate them into internal policies and procedures
  • apply them proportionately, based on the context, purpose and risk level of each AI use case

This flexible approach allows organizations to adapt the Principles to their specific operational environments while maintaining alignment with internationally recognized standards.

AIEI Declarative AI Certificate

Members of the Association who confirm their support for the Principles may obtain the AIEI Declarative AI Certificate (AI Verified badge).

The certificate:

  • reflects an organization’s commitment to responsible AI practices
  • supports transparency and credibility in the market
  • serves as a visible signal of alignment with ethical and governance standards

The initiative is voluntary and does not impose regulatory obligations. Instead, it is intended to encourage responsible practices and continuous improvement.

The updated version of the Principles was developed with contributions from more than 12 experts from 9 countries, led by the AIEI Legal Committee, including Amisha Mittal, Kateryna Kernoz, Hysmir Idrizi, Jae-Seong Lee and Taras Lytovchenko. We are grateful to all contributors involved.

We welcome feedback and suggestions as part of our continuing work to strengthen these Principles.

The full version of the AIEI AI Principles is available here: https://ai-ei.org/ai-principles/ 

Event

AI Horizon Conference

The AI Horizon Conference brought together entrepreneurs, investors and industry leaders in Lisbon to discuss key trends and shape the future of AI.

Lisbon, Portugal
View the Event Report

Join Us in Shaping the Future of Ethical AI!

Join us as a member and play a vital role in shaping a future where AI is created responsibly, with integrity, transparency, and fairness at its core.

Apply Now