AI Principles
Guiding Principles for Responsible AI: Ethics, Safety, and Risk Management
| Version | Date | Description |
|---|---|---|
| Draft Creation | January 20, 2025 | Principles discussion |
| Revision 0.1 | February 7, 2025 | Pre-publishing draft, concept-draft finalizing |
| Revision 0.2 | February 24, 2025 | Concept draft publishing for public discussion |
| Revision 0.3 | March 21, 2025 | Revision after public discussion |
| Revision 1.0 | April 1, 2025 | Version 1.0 Published |
| Revision 2.0 | April 23, 2026 | Version 2.0 Published |
The updated version of the AI Principles is now published below. We sincerely thank everyone who contributed to this process. We continue to welcome additional feedback. Please share your suggestions at pm@ai-ei.org. All proposals will be carefully reviewed and considered in future updates as part of our ongoing commitment to refinement and impact.
AIEI DECLARATIVE PRINCIPLES FOR RESPONSIBLE USE, DEVELOPMENT, AND IMPLEMENTATION OF ARTIFICIAL INTELLIGENCE (For Members of the AIEI Association)
This Declaration outlines the core principles for the responsible use, development, and implementation of artificial intelligence (AI) technologies, based on the best international practices in ethics, data privacy, safety, and risk management. These Principles apply both to AI systems developed internally and to AI systems, models, or services obtained from third parties, including vendors, contractors, and external platforms. The Declaration's primary objective is to ensure a balance between the innovative advancement of AI and the protection of human rights and freedoms, adherence to ethical standards, and social responsibility.
The Declaration is voluntary and advisory in nature. It does not replace or take precedence over national legislation or international legal instruments. In the event of any discrepancies between the provisions of this Declaration and the applicable laws of the jurisdiction in which a member of the Association is registered or operates, the relevant legal norms and requirements of the competent regulatory authorities shall apply.
These Principles should be applied across the AI lifecycle, including design, procurement, development, testing, deployment, use, monitoring, incident response, and retirement of AI systems. They should also be applied proportionately, taking into account the purpose of the AI system, the context of use, and the nature and severity of the potential impact on individuals, organizations, and society.
The Association may issue the AIEI Declarative Certificate for Responsible AI Development, Implementation and Use (AIEI Declarative AI Certificate) to members who confirm their support for these principles and consider incorporating them into their approaches to working with AI technologies. The AIEI Declarative AI Certificate is a voluntary initiative and does not impose any obligations or restrictions on the members of the Association.
1. Core principles for responsible AI development, implementation, and use
1.1. Respect for Human Rights and Ethics: AI systems should be developed and deployed in ways that uphold fundamental human rights, freedoms, and dignity, grounded in both legal obligations and ethical responsibilities. This includes operating within the boundaries of all applicable laws and regulations, while also striving to promote justice, equity, and respect for the intrinsic worth of every individual. Ethical AI development begins with this foundational commitment to people’s rights and the rule of law as a non-negotiable baseline.
What this means in practice: Organizations should assess whether an AI use case may affect people’s rights, freedoms, or dignity, especially where the system may influence access to work, education, healthcare, finance, public services, or justice. The depth and formality of such assessment should be proportionate to the level of risk and potential impact associated with the AI use case.
Example: a company should not use an AI tool to monitor workers in a way that is excessive, hidden, or inconsistent with applicable law and basic expectations of dignity and fairness.
1.2. Human Oversight: Human autonomy and decision-making should be preserved. AI should be designed to augment human capabilities, not replace or undermine them. Important decisions affecting individuals (in areas like health, finance, or justice) should include human review or the possibility of human intervention. The type and intensity of human oversight should be proportionate to the level of risk and potential impact associated with the AI system. This may include mechanisms such as human-in-the-loop (review before action), human-on-the-loop (ongoing monitoring), or human-in-command (overall accountability and control). Organizations should establish oversight mechanisms to ensure that humans remain ultimately accountable and can override or adjust AI outcomes if necessary.
What this means in practice: Human oversight should be real and effective, not merely formal. A person responsible for review should understand what the AI output means, have enough information to assess it, and have the authority to reject, escalate, or correct the result where needed.
Example: if AI is used to rank job candidates, a trained recruiter or hiring manager should review the results before a final decision is made and should be able to disregard the AI ranking if it appears incomplete, biased, or unreasonable.
1.3. Fairness and Non-Discrimination: AI systems should treat individuals and groups fairly, avoiding biases that result in unjust or prejudicial outcomes. This involves actively identifying and mitigating any bias in data or algorithms to prevent discrimination based on attributes like race, gender, age, ethnicity, religion, or disability. Fairness also means striving for inclusive design – ensuring AI is accessible and works well for all segments of society, including marginalized or underrepresented communities. Processes such as bias audits and diverse stakeholder input should be used throughout the AI lifecycle so that outcomes are equitable and do not replicate historical injustices.
What this means in practice: Fairness means looking for patterns that may disadvantage some people without a valid reason and taking reasonable steps to reduce that risk. Accessibility means considering whether people with different needs can meaningfully use or be fairly assessed by the system.
Example: if an AI system is used to screen loan applications, the organization should check whether the model produces systematically worse outcomes for certain groups and should investigate whether those outcomes reflect bias, poor data, or inappropriate proxy factors.
1.4. Privacy and Data Protection: AI systems should adhere to strict data protection standards, collecting and using personal data only where there is a valid legal basis, a legitimate and specified purpose and minimizing data whenever possible. Individuals’ personal data rights – including the rights to information, access, correction, deletion, and to opt out of automated processing – should be preserved. Wherever feasible, data used in AI should be anonymized or encrypted to protect identities. Organizations need robust data governance practices to ensure security and confidentiality, and they should be transparent about what data is being used and why.
What this means in practice: Organizations should know what data, including personal data, an AI system uses, why that data is needed, how long it is kept, who can access it, and what rights individuals may have in relation to it.
Example: before using an external AI tool to summarize customer communications, an organization should check whether customer data is retained by the provider, whether it is used to train the provider’s systems, and whether appropriate safeguards and notices are in place.
1.5. Transparency (Explainability): AI operations should be transparent to developers, users, and impacted persons. This means it should be clear when people are interacting with or subject to an AI system (rather than a human), and stakeholders should have access to understandable information about how the AI works. Explainability is key – the logic behind significant AI decisions or outcomes should be documented and, where appropriate, explained in plain language. For high-stakes or impactful applications, organizations should provide explanations for AI decisions (e.g., why an application was denied by an algorithm) and disclose the main factors involved.
What this means in practice: People should know when AI is being used in a meaningful way, and important AI-assisted outcomes should be understandable.
Example: a chatbot should disclose that it is AI-assisted, and a person affected by an AI-supported decision should be able to receive a plain-language explanation of the main reasons for that outcome.
1.6. Safety, Security, Robustness, and Reliability: AI systems should be safe and reliable in their design and deployment. This involves thorough testing and validation to ensure systems perform as intended under expected conditions, and robust engineering to handle errors, adversarial attacks, or unexpected inputs gracefully. AI should also have safeguards to prevent harm: if a model behaves unpredictably or hazards are detected, there should be mechanisms to shut it down or revert to a safe state. Cybersecurity is part of safety – AI models and data should be protected against unauthorized access or manipulation. Even after deployment, AI outcomes should be continuously tracked, with updates and improvements made to fix vulnerabilities or improve accuracy. The goal is to minimize risks of physical or digital harm to individuals, organizations, and society at large from AI failures or misuse.
What this means in practice: Responsible AI is not only about cybersecurity. It also includes whether the system works properly, whether it behaves predictably, and whether the organization can respond if something goes wrong.
Example: if an AI tool used for customer support starts giving clearly inaccurate or unauthorized responses after an update, the organization should be able to detect the issue, limit the tool’s use, and correct the problem.
1.7. Accountability and Governance: Organizations and individuals developing, deploying, procuring, licensing, integrating, or otherwise relying on AI systems, models, or services should be accountable for their use, operation, and impact within their area of responsibility. Clear lines of responsibility should be established so that it is known which team, function, or role is responsible for governance, review, oversight, risk assessment, documentation, incident response, and remediation. This applies both to AI systems developed internally and to third-party AI systems, models, or services obtained from vendors, contractors, partners, or external platforms. Organizations should take reasonable steps to assess the suitability, risks, and governance implications of AI systems before and during use, including, where appropriate, their intended purpose, known limitations, security and privacy implications, human oversight requirements, data use practices, contractual protections, incident reporting arrangements, and the provider’s approach to testing, updates, and change management. Accountability also includes maintaining appropriate documentation and traceability, being prepared to explain and justify AI-supported outcomes, and taking corrective action where issues, harms, or complaints arise. Where appropriate, organizations should also consider providing a route to review, challenge, or correct materially significant AI-supported outcomes.
What this means in practice: Someone should clearly own the AI use case, the controls around it, and the response if there is an error, complaint, or vendor-related issue. Responsible AI governance also applies when the organization does not build the system itself. Before relying on external AI tools, the organization should understand what the tool does, what data it uses, what risks it creates, and what controls are available.
Example: If an organization adopts a third-party AI note-taking tool for internal meetings, it should identify who is responsible for approving the use case, check whether meeting data is stored or used to train the provider’s models, confirm what security and deletion controls exist, define when the tool may or may not be used, and have a process to respond if the tool creates an error, confidentiality issue, or complaint.
1.8. Societal Benefit and Responsibility: AI should be developed and used in ways that benefit society and promote the public good. This means prioritizing applications that address social challenges (such as improving healthcare, education, accessibility, or environmental protection) and steering away from uses that could cause societal harm or injustice. AI initiatives should undergo societal impact evaluations – considering broader effects on communities, social structures, or democracy. Responsible use also implies avoiding AI applications that may enable mass surveillance, social scoring, or manipulative influence that threatens societal values. Whenever AI might significantly affect people’s lives (jobs, opportunities, rights), it should be deployed with caution, transparency, and in dialogue with the affected communities.
What this means in practice: Organizations should not look only at whether an AI system is efficient or profitable. They should also consider whether its use could create broader harm or undermine trust.
Example: a company should be cautious before using AI tools that profile individuals in sensitive contexts or encourage highly manipulative behavior toward vulnerable users.
1.9. Environmental Sustainability (Green AI): All AI development should consider its environmental footprint and strive to be sustainable. This includes optimizing AI algorithms and infrastructure for energy efficiency to reduce carbon emissions (for example, improving the efficiency of data centers or model training processes). Organizations are encouraged to monitor and report the energy usage and environmental impact of their AI systems, and to innovate new techniques (like more efficient algorithms or hardware) that make AI greener. Beyond minimizing harm, AI can also be a tool for environmental good – for instance, using AI in climate science, biodiversity monitoring, or resource management to help combat environmental challenges.
What this means in practice: Organizations should not assume that larger or more resource-intensive AI systems are always the better choice.
Example: if a simple rules-based tool or smaller model can achieve the same internal business purpose, an organization may choose that approach instead of a more resource-intensive model.
1.10. Awareness, Education, and Workforce Adaptation: Organizations should promote AI literacy – educating employees, users, and the public about what AI is, how it works, and its potential impacts (both positive and negative). An informed society is better equipped to hold AI systems accountable and engage in meaningful dialogue about AI’s role. The integration of AI into work processes should take into account its impact on employment, professional roles, and workforce development. Providing clear pathways for skills adaptation and career transitions can help employees adjust to evolving job requirements. Where relevant, access to reskilling, upskilling, and professional support programs can contribute to a balanced approach to technological change while respecting labor rights.
What this means in practice: People cannot oversee or use AI responsibly if they do not understand what the system does, what its limits are, and what their own responsibilities are.
Example: if a legal, HR, or compliance team begins using an AI drafting or review tool, the organization should train the team on acceptable use, verification expectations, confidentiality risks, and escalation rules.
2. Voluntary adoption of principles
The adoption of these principles is voluntary and aims to support best international practices in ethics, safety, and responsible risk management in the development, implementation, and use of AI technologies.
Each member of the Association independently determines how to implement these principles in its activities, which may include developing internal policies and procedures. This approach helps improve management efficiency, enhance reputation, and build trust among partners and clients.
In the event of circumstances that may indicate a deviation from the declared principles, members may conduct an internal assessment of the situation and, if necessary, seek methodological support from the Association.
3. The AIEI Declarative Certificate
The AIEI Declarative Certificate may be issued to Association members upon confirmation of their support for the principles outlined in this Declaration.
The AIEI Declarative Certificate remains valid until the end of the current membership period in the Association, provided that the principles outlined in the Declaration are followed, the internal rules of the Association are respected, and organizational and financial obligations are fulfilled. If the membership period ends without renewal, the AIEI Declarative Certificate’s validity automatically terminates without further notice.
The Association reserves the right to revoke the AIEI Declarative Certificate in the following cases:
- submission of a written request by the certificate holder for voluntary withdrawal;
- established evidence of serious or repeated violations of the principles outlined in the Declaration;
- receipt of substantiated complaints from third parties regarding violations of the principles;
- failure to meet organizational or financial obligations to the Association.
The AIEI Declarative AI Certificate revocation procedure is carried out by decision of the relevant committee of the Association following an assessment of the circumstances. Before a decision is made, the certificate holder has the right to provide written explanations and supporting documents in defense of their position.
4. Disclaimer
The Association does not monitor or supervise the activities of its members concerning compliance with the principles outlined in this Declaration, and is not responsible for the consequences of their application or non-compliance in professional or organizational activities.
If changes in legislation affect the interpretation of these principles, members should follow the laws and regulatory requirements applicable to their activities.
Acceptance of the terms of this Declaration is a voluntary expression of support for the declared principles and does not create any legally binding obligations between the Association and its members, except where explicitly stated in the Declaration.
The AIEI Declarative AI Certificate issued by the Association does not have regulatory status and does not constitute regulatory approval, legal certification, or independent verification of compliance with applicable law. It is a voluntary initiative that reflects support for ethical principles and best practices in responsible AI use.
The Association shall not be held liable for any risks or consequences related to the use of AI technologies by its members, including cases of non-compliance with the requirements of the Declaration or applicable legislation.
The short explanations and examples included in this Declaration are illustrative only. They are intended to support practical understanding and do not limit the meaning or application of the relevant Principle.
5. Final provisions
Members of the Association will be notified of any changes to this Declaration in writing or through the official communication channels of the Association. Given the voluntary and declarative nature of the Principles, members have the right not to accept the new version of the Declaration. In such cases, the previous version remains valid for them until the end of the current membership period.
All disputes arising from the interpretation or application of this Declaration shall be resolved through consultations between the parties. If a mutual resolution cannot be reached, the member may request additional clarification from the relevant committee of the Association.