AI Principles
Guiding Principles for Responsible AI: Ethics, Safety, and Risk Management

REVISION HISTORY OF THE AIEI DECLARATIVE AI PRINCIPLES
| Version | Date | Description |
|---|---|---|
| Draft Creation | January 20, 2025 | Principles discussion |
| Revision 0.1 | February 7, 2025 | Pre-publishing draft, concept-draft finalizing |
| Revision 0.2 | February 24, 2025 | Concept draft published for public discussion |
| Revision 0.3 | March 21, 2025 | Revision after public discussion |
| Revision 1.0 | April 1, 2025 | Version 1.0 published |
The public discussion has now concluded, and we sincerely thank everyone for their participation. Version 1.0 is published below. If you have additional suggestions, please send them to pm@ai-ei.org. All proposals will be considered in the upcoming rounds of discussion and revision, planned for Q2 and Q3. Advance notice will be provided.
AIEI DECLARATIVE PRINCIPLES FOR RESPONSIBLE USE, DEVELOPMENT, AND IMPLEMENTATION OF ARTIFICIAL INTELLIGENCE (For Members of the AIEI Association)
This Declaration outlines the core principles for the responsible use, development, and implementation of artificial intelligence (AI) technologies based on the best international practices in ethics, safety, and risk management. Its primary objective is to ensure a balance between the innovative advancement of AI and the protection of human rights and freedoms, adherence to ethical standards, and social responsibility.
The Declaration is voluntary and advisory in nature. It does not replace or take precedence over national legislation or international legal instruments. In the event of any discrepancies between the provisions of this Declaration and the applicable laws of the jurisdiction in which a member of the Association is registered or operates, the relevant legal norms and requirements of the competent regulatory authorities shall apply.
The Association may issue the AIEI Declarative Certificate for Responsible AI Development, Implementation, and Use (the "AIEI Declarative AI Certificate") to members who confirm their support for these principles and consider incorporating them into their approaches to working with AI technologies. The AIEI Declarative AI Certificate is a voluntary initiative and does not impose any obligations or restrictions on members of the Association.
1. Core principles for responsible AI development, implementation, and use
Respect for Human Rights and Ethics: AI systems should be developed and deployed in ways that uphold fundamental human rights, freedoms, and dignity, grounded in both legal obligations and ethical responsibilities. This includes operating within the boundaries of all applicable laws and regulations, while also striving to promote justice, equity, and respect for the intrinsic worth of every individual. Ethical AI development begins with this foundational commitment to people’s rights and the rule of law as a non-negotiable baseline.
Human Oversight: Human autonomy and decision-making should be preserved. AI should be designed to augment human capabilities, not replace or undermine them. Important decisions affecting individuals (in areas like health, finance, or justice) should include human review or the possibility of human intervention. Organizations need to establish oversight mechanisms – such as human-in-the-loop controls or review boards – to ensure that humans remain ultimately accountable and can override or adjust AI outcomes if necessary.
Fairness and Non-Discrimination: AI systems should treat individuals and groups fairly, avoiding biases that result in unjust or prejudicial outcomes. This involves actively identifying and mitigating any bias in data or algorithms to prevent discrimination based on attributes like race, gender, age, ethnicity, religion, or disability. Fairness also means striving for inclusive design – ensuring AI is accessible and works well for all segments of society, including marginalized or underrepresented communities. Processes such as bias audits and diverse stakeholder input should be used throughout the AI lifecycle so that outcomes are equitable and do not replicate historical injustices.
Privacy and Data Protection: AI systems should adhere to strict data protection standards, collecting and using personal data only for legitimate, consented purposes and minimizing data whenever possible. Individuals’ personal data rights – including the rights to information, access, correction, deletion, and to opt out of automated processing – should be preserved. Wherever feasible, data used in AI should be anonymized or encrypted to protect identities. Organizations need robust data governance practices to ensure security and confidentiality, and they should be transparent about what data is being used and why.
Transparency (Explainability): AI operations should be transparent to developers, users, and impacted persons. This means it should be clear when people are interacting with or subject to an AI system (rather than a human), and stakeholders should have access to understandable information about how the AI works. Explainability is key – the logic behind significant AI decisions or outcomes should be documented and, where appropriate, explained in plain language. For high-stakes or impactful applications, organizations should provide explanations for AI decisions (e.g., why an application was denied by an algorithm) and disclose the main factors involved.
Safety and Security: AI systems should be safe and reliable in their design and deployment. This involves thorough testing and validation to ensure systems perform as intended under expected conditions, and robust engineering to handle errors, adversarial attacks, or unexpected inputs gracefully. AI should also have safeguards to prevent harm: if a model behaves unpredictably or hazards are detected, there should be mechanisms to shut it down or revert to a safe state. Cybersecurity is part of safety – AI models and data should be protected against unauthorized access or manipulation. Even after deployment, AI outcomes should be continuously monitored, with updates and improvements made to fix vulnerabilities or improve accuracy. The goal is to minimize the risk of physical or digital harm to individuals, organizations, and society at large from AI failures or misuse.
Accountability: Organizations and individuals developing or deploying AI should be accountable for their systems’ behavior and impacts. Clear lines of responsibility should be established – it should be known who (which team or role) is answerable if an AI system causes harm or makes an error. This principle entails implementing governance structures such as ethical AI committees, audit processes, or external oversight boards to review AI initiatives. Accountability also means being proactive: conducting impact assessments and audits, keeping documentation (datasets, algorithms, decision logs) for traceability, and being prepared to explain and justify AI outcomes. When things go wrong, accountable AI practice includes rectifying issues and providing redress or remedies to affected parties if appropriate.
Societal Benefit and Responsibility: AI should be developed and used in ways that benefit society and promote the public good. This means prioritizing applications that address social challenges (such as improving healthcare, education, accessibility, or environmental protection) and steering away from uses that could cause societal harm or injustice. AI initiatives should undergo societal impact evaluations – considering broader effects on communities, social structures, or democracy. Responsible use also implies avoiding AI applications that may enable mass surveillance, social scoring, or manipulative influence that threatens societal values. Whenever AI might significantly affect people’s lives (jobs, opportunities, rights), it should be deployed with caution, transparency, and in dialogue with the affected communities.
Environmental Sustainability (Green AI): All AI development should consider its environmental footprint and strive to be sustainable. This includes optimizing AI algorithms and infrastructure for energy efficiency to reduce carbon emissions (for example, improving the efficiency of data centers or model training processes). Organizations are encouraged to monitor and report the energy usage and environmental impact of their AI systems, and to innovate new techniques (like more efficient algorithms or hardware) that make AI greener. Beyond minimizing harm, AI can also be a tool for environmental good – for instance, using AI in climate science, biodiversity monitoring, or resource management to help combat environmental challenges.
Awareness, Education, and Workforce Adaptation: Organizations should promote AI literacy – educating employees, users, and the public about what AI is, how it works, and its potential impacts (both positive and negative). An informed society is better equipped to hold AI systems accountable and to engage in meaningful dialogue about AI’s role. The integration of AI into work processes should take into account its impact on employment, professional roles, and workforce development. Providing clear pathways for skills adaptation and career transitions can help employees adjust to evolving job requirements. Where relevant, access to reskilling, upskilling, and professional support programs can contribute to a balanced approach to technological change while respecting labor rights.
2. Voluntary adoption of principles
The adoption of these principles is voluntary and aims to support best international practices in ethics, safety, and responsible risk management in the development, implementation, and use of AI technologies.
Each member of the Association independently determines how to implement these principles in its activities, which may include developing internal policies and procedures. This approach helps improve management efficiency, enhance reputation, and build trust among partners and clients.
In the event of circumstances that may indicate a deviation from the declared principles, members may conduct an internal assessment of the situation and, if necessary, seek methodological support from the Association.
3. The AIEI Declarative AI Certificate
The AIEI Declarative AI Certificate may be issued to Association members upon confirmation of their support for the principles outlined in this Declaration.
The AIEI Declarative AI Certificate remains valid until the end of the current membership period in the Association, provided that the principles outlined in the Declaration are followed, the internal rules of the Association are respected, and organizational and financial obligations are fulfilled. If the membership period ends without renewal, the AIEI Declarative AI Certificate’s validity automatically terminates without further notice.
The Association reserves the right to revoke the AIEI Declarative AI Certificate in the following cases:
- submission of a written request by the certificate holder for voluntary withdrawal;
- established evidence of serious or repeated violations of the principles outlined in the Declaration;
- receipt of substantiated complaints from third parties regarding violations of the principles;
- failure to meet organizational or financial obligations to the Association.
The AIEI Declarative AI Certificate revocation procedure is conducted based on the decision of the relevant committee of the Association following an assessment of the circumstances. Before a decision is made, the certificate holder has the right to provide written explanations and supporting documents to defend their position.
4. Disclaimer
The Association does not monitor or supervise the activities of its members concerning compliance with the principles outlined in this Declaration, and is not responsible for the consequences of their application or non-compliance in professional or organizational activities.
If there are changes in legislation that affect the interpretation of these principles, members should follow the laws of their respective countries or territories. If a company provides AI solutions that are used in the European Union, the provisions of the EU Artificial Intelligence Act (AI Act) will apply unless otherwise specified by international or national regulations.
Acceptance of the terms of this Declaration is a voluntary expression of support for the declared principles and does not create any legally binding obligations between the Association and its members, except where explicitly stated in the Declaration.
The AIEI Declarative AI Certificate issued by the Association does not have regulatory status and does not confirm compliance with legislative requirements. It is a voluntary initiative that reflects support for ethical principles and best practices in responsible AI use.
The Association shall not be held liable for any risks or consequences related to the use of AI technologies by its members, including cases of non-compliance with the requirements of the Declaration or applicable legislation.
5. Final provisions
Members of the Association will be notified of any changes to this Declaration in writing or through the official communication channels of the Association. Given the voluntary and declarative nature of the Principles, members have the right not to accept the new version of the Declaration. In such cases, the previous version remains valid for them until the end of the current membership period.
All disputes arising from the interpretation or application of this Declaration shall be resolved through consultations between the parties. If a mutual resolution cannot be reached, the member may request additional clarification from the relevant committee of the Association.