AI Principles
Guiding Principles for Responsible AI: Ethics, Safety, and Risk Management

THIS IS A CONCEPT DRAFT OF THE AIEI DECLARATIVE AI PRINCIPLES
| Version | Date | Description |
|---|---|---|
| Draft Creation | January 20, 2025 | Principles discussion |
| Revision 0.1 | February 7, 2025 | Pre-publishing draft, concept-draft finalizing |
| Revision 0.2 | February 24, 2025 | Concept draft publishing for public discussion |
| Revision 0.3 | March 21, 2025 | Version 1.0 approval (planned) |
At this stage, we are presenting for review and discussion the concept draft of the AIEI declarative AI principles, which our association is developing. If you are interested in participating in the discussion, creation, editing, or other activities related to the AIEI declarative AI principles, please contact us at pm@ai-ei.org. The submission deadline is March 10, 2025 (23:59 CET). We encourage everyone to support and engage in the discussion. Please note that the final first edition will be published on March 21, 2025.
AIEI DECLARATIVE PRINCIPLES FOR RESPONSIBLE USE, DEVELOPMENT, AND IMPLEMENTATION OF ARTIFICIAL INTELLIGENCE (For Members of the AIEI Association)
This Declaration outlines the core principles for the responsible use, development, and implementation of artificial intelligence (AI) technologies based on the best international practices in ethics, safety, and risk management. Its primary objective is to ensure a balance between the innovative advancement of AI and the protection of human rights and freedoms, adherence to ethical standards, and social responsibility.
The Declaration is voluntary and advisory in nature. It does not replace or take precedence over national legislation or international legal instruments. In the event of any discrepancies between the provisions of this Declaration and the applicable laws of the jurisdiction in which a member of the Association is registered or operates, the relevant legal norms and requirements of the competent regulatory authorities shall apply.
The Association may issue the AIEI Declarative Certificate for Responsible AI Development, Implementation, and Use (AIEI Declarative AI Certificate) to members who confirm their support for these principles and consider incorporating them into their approaches to working with AI technologies. The AIEI Declarative AI Certificate is a voluntary initiative and does not impose any obligations or restrictions on the members of the Association.
1. Core principles for responsible AI development, implementation, and use
Ethics and social responsibility
The development and use of AI should be guided by ethical principles, respect for human rights, and social responsibility. Considering the potential social, economic, and cultural impact of AI-related decisions can help ensure responsible and well-informed implementation.
Where relevant, developers and users of AI may explore the broader implications of their technologies and adopt approaches that support inclusivity, social well-being, and fairness.
AI has the potential to address socially significant challenges and contribute to sustainable development. Its application can be guided by the aim of creating positive societal impact while respecting ethical considerations.
Privacy and Data Protection
AI systems should respect international standards for personal data protection. Data processing should be carried out only for specified and lawful purposes, in line with the principles of minimization, transparency, and security.
Where applicable, steps can be taken to anonymize data and reduce the risk of re-identification, with documentation and periodic review of applied methods. Transparency involves providing users with clear and accessible information about how AI works, its capabilities, limitations, and decision-making criteria. When AI decisions may affect individuals’ rights or obligations, users should have the opportunity to understand the logic behind the decision and its key factors.
In the development and deployment of AI systems, attention can be given to protecting data subject rights, including access, correction, and deletion of personal data, as well as the option to opt out of automated data analysis (TDM opt-out). Data deletion procedures and user request handling should be clearly outlined, transparent, and accessible.
Safety and AI risk management
Ensuring the safety and reliability of AI systems is an important aspect of responsible development and use. Where applicable, assessing potential risks, following relevant standards, and implementing protective measures can help reduce vulnerabilities and improve system resilience.
A structured approach can help manage risks and align systems with ethical standards. Stages may include:
- Planning – considering potential risks, ensuring data quality, and addressing biases.
- Development – conducting testing, validating performance, and documenting methodologies.
- Implementation – promoting transparency and ensuring AI decisions can be reviewed when needed.
- Monitoring – observing system performance, resolving issues, and refining models in response to evolving standards.
For AI systems with higher risk factors, additional attention can be given to security updates, self-monitoring mechanisms, and the ability to address critical issues.
Users should be provided with the necessary information during their interaction with AI systems, in a clear and accessible way, to support informed decision-making.
Post-market monitoring and ongoing improvements can help maintain system performance and alignment with evolving safety and ethical standards.
Prevention of manipulation, social control, and data abuse
AI systems should not be used in ways that undermine human autonomy or restrict freedom of choice. Where applicable, measures can be taken to prevent the use of subliminal influence techniques that manipulate behavior or shape decisions beyond conscious awareness.
The use of AI for social scoring, i.e., evaluating or classifying individuals based on their behavior, social status, or economic factors, should be approached with caution and aligned with ethical principles to prevent discriminatory outcomes.
Special consideration may be given to real-time biometric identification systems, particularly regarding their deployment in public spaces. Limiting their use can help prevent excessive control, intrusion into private life, and potential violations of privacy rights.
Fairness and non-discrimination
Fairness, equal opportunities, and respect for human rights are among the priorities of AI system development and use. Preventing biased or unfair treatment based on characteristics such as race, gender, age, nationality, or religion contributes to ethical and responsible AI practices.
Ensuring broad access to AI-driven solutions is important for inclusivity and fairness. AI systems should, where applicable, take into account the needs of different communities and social groups, avoiding barriers that could exclude certain populations.
To ensure fairness and mitigate bias, evaluations of AI algorithms may be conducted by developers, independent audits, or self-regulating committees, depending on the context and applicable best practices.
Workforce adaptation and human oversight
The integration of AI into work processes should take into account its impact on employment, professional roles, and workforce development. Providing clear pathways for skills adaptation and career transitions can help employees adjust to evolving job requirements. Where relevant, access to reskilling, upskilling, and professional support programs can contribute to a balanced approach to technological change while respecting labor rights.
Human oversight remains an important element in AI-assisted decision-making, particularly in areas affecting individual rights, safety, and employment. Establishing review and intervention mechanisms where needed can help maintain transparency, accountability, and trust in AI systems.
Green AI
The development and use of AI can contribute to environmental sustainability by reducing its ecological impact. Efforts to optimize energy consumption and minimize the carbon footprint in AI development, training, and operation can support more sustainable technology practices.
AI can also be leveraged for environmental monitoring and ecosystem protection. Implementing innovative solutions that reduce resource consumption and support climate initiatives can enhance the responsible use of AI in addressing global environmental challenges.
Support for science and innovation
Advancing scientific research and technological innovation in AI benefits from open collaboration and interdisciplinary exchange. Encouraging initiatives that align with ethical principles, transparency, and respect for intellectual property rights can contribute to responsible AI development.
Controlled testing environments, such as regulatory sandboxes, can provide a structured space for experimenting with new AI solutions while maintaining high safety and ethical standards. This approach helps balance innovation with risk management and regulatory compliance.
Promoting AI literacy among users and developers plays a key role in fostering responsible adoption. Increasing awareness of AI’s capabilities, risks, and legal considerations through accessible education and training programs can help ensure informed and ethical use of AI technologies.
2. Voluntary adoption of principles
The adoption of these principles is voluntary and aims to support best international practices in ethics, safety, and responsible risk management in the development, implementation, and use of AI technologies.
Each member of the Association independently determines the approaches to implementing these principles in their activities, which may include the development of internal policies and procedures. This approach helps improve management efficiency, enhance reputation, and build trust among partners and clients.
In the event of circumstances that may indicate a deviation from the declared principles, members may conduct an internal assessment of the situation and, if necessary, seek methodological support from the Association.
3. The AIEI Declarative AI Certificate
The AIEI Declarative AI Certificate may be issued to Association members upon confirmation of their support for the principles outlined in this Declaration.
The AIEI Declarative AI Certificate remains valid until the end of the current membership period in the Association, provided that the principles outlined in the Declaration are followed, the internal rules of the Association are respected, and organizational and financial obligations are fulfilled. If the membership period ends without renewal, the Certificate's validity automatically terminates without further notice.
The Association reserves the right to revoke the AIEI Declarative AI Certificate in the following cases:
- submission of a written request by the certificate holder for voluntary withdrawal;
- established evidence of serious or repeated violations of the principles outlined in the Declaration;
- receipt of substantiated complaints from third parties regarding violations of the principles;
- failure to meet organizational or financial obligations to the Association.
The AIEI Declarative AI Certificate revocation procedure is conducted based on the decision of the relevant committee of the Association following an assessment of the circumstances. Before a decision is made, the certificate holder has the right to provide written explanations and supporting documents in defense of their position.
4. Disclaimer
The Association does not monitor or supervise the activities of its members concerning compliance with the principles outlined in this Declaration, and is not responsible for the consequences of their application or non-compliance in professional or organizational activities.
If there are changes in legislation that affect the interpretation of these principles, participants should follow the laws of their respective countries or territories. If a company provides AI solutions that are used in the European Union, the provisions of the EU AI Act will apply unless otherwise specified by international or national regulations.
Acceptance of the terms of this Declaration is a voluntary expression of support for the declared principles and does not create any legally binding obligations between the Association and its members, except where explicitly stated in the Declaration.
The AIEI Declarative AI Certificate issued by the Association does not have regulatory status and does not confirm compliance with legislative requirements. It is a voluntary initiative that reflects support for ethical principles and best practices in responsible AI use.
The Association shall not be held liable for any risks or consequences related to the use of AI technologies by its members, including cases of non-compliance with the requirements of the Declaration or applicable legislation.
5. Final provisions
Members of the Association will be notified of any changes to this Declaration in writing or through the official communication channels of the Association. Given the voluntary and declarative nature of the Principles, members have the right not to accept the new version of the Declaration. In such cases, the previous version remains valid for them until the end of the current membership period.
All disputes arising from the interpretation or application of this Declaration shall be resolved through consultations between the parties. If a mutual resolution cannot be reached, the member may request additional clarification from the relevant committee of the Association.