Based in European Union
info@ai-ei.org
+351 93 832 8533

AI and My Rights: What You Don’t Know Could Hurt You

Artificial Intelligence (AI) is permeating every aspect of our lives, from everyday shopping to professional decision-making. Yet its integration often comes with hidden threats to human rights: AI systems can collect and use personal data without proper consent and make decisions that carry inherent biases. These processes frequently lack transparency, leaving the average citizen unaware of the potential consequences. As algorithms become ever more deeply embedded in daily life, it is crucial to understand how AI can infringe on human rights and how to prevent such violations.

Privacy Violations

Modern AI-based systems can collect, analyze, and use vast amounts of personal data, often without users' knowledge or meaningful consent. One of the most widespread collection methods is facial recognition technology, deployed in public places, shopping centers, and even by government bodies. These systems can identify individuals and build databases of them without their awareness.

Social media algorithms provide another example of privacy infringement. Platforms deploy complex AI systems to collect and analyze users’ behavioral data for personalized content and targeted advertising. As a result, users often remain unaware of the extent of the data collected and how it is utilized by third parties. The lack of transparency and the complexity of user agreements often obscure this process.

AI systems collect data through various channels, including GPS tracking, online activity monitoring, mobile app usage, and other digital sources. The absence of clear regulations leads to scenarios where data is used for commercial and even political purposes, violating the right to privacy protected by international human rights standards.

Discrimination in Algorithms

AI algorithms, although created for objectivity and efficiency, often reproduce and even amplify existing social biases. In the financial sector, credit-scoring algorithms have already sparked controversy. A well-known example is the 2019 Apple Card case, in which the card's credit-limit algorithm reportedly offered women substantially lower limits than men with similar financial profiles.

Gender bias also appears in hiring. A widely reported example is the experimental recruiting tool Amazon scrapped after discovering that it downgraded women's résumés. The system had been trained on historical data in which men predominantly held technical positions, so it automatically reinforced bias against female candidates.

The justice sector has also faced AI discrimination. The COMPAS algorithm, used in U.S. courts to predict the likelihood of recidivism, was found to exhibit systemic racial bias: it disproportionately predicted a higher risk of reoffending for African American defendants than for white defendants with comparable records.

Right to Explanation

The primary aim of the right to explanation is to ensure the transparency and accountability of decision-making algorithms. In many jurisdictions, including the EU, the right to explanation is part of the broader transparency principle enshrined in the GDPR, most notably in the safeguards for automated decision-making under Article 22.

However, one of the main challenges in implementing the right to explanation is the phenomenon of “black boxes.” A black box, in the context of AI, refers to an algorithm whose internal processes remain hidden and incomprehensible not only to users but sometimes even to the developers themselves. This is particularly characteristic of complex models such as deep neural networks, where the interaction of numerous parameters produces conclusions that cannot be explained using traditional methods.

The opacity of black boxes has serious consequences. If an algorithm makes decisions that affect people’s lives (for example, a loan denial or a rejected rental application due to an automated verification system) without the ability to explain the rationale behind its actions, it creates risks of discrimination, errors, and violations of individual rights. The inability to understand the basis of a decision hinders the possibility of review and calls its fairness into question.
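By contrast, a transparent model can report exactly how each input pushed its decision one way or the other. The sketch below illustrates the idea with a deliberately simple linear scoring model; the features, weights, and threshold are hypothetical and stand in for no real credit-scoring system:

```python
# Minimal sketch of an explainable decision: a linear scoring model
# whose per-feature contributions can be read off directly.
# All weights, features, and the threshold are hypothetical.

WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def score(applicant):
    """Weighted sum of the applicant's features."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Each feature's contribution to the score, largest impact first."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income": 2.0, "debt_ratio": 1.5, "years_employed": 1.0}
decision = "approve" if score(applicant) >= THRESHOLD else "deny"
```

For this applicant, `explain` shows that the debt ratio outweighed income, which is precisely the kind of rationale a black-box model cannot surface and a person challenging a denial would need.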

How AIEI Helps Protect Your Rights

The AIEI Association works to ensure that artificial intelligence systems are transparent, accountable, and fair. We understand that algorithm-driven decisions can significantly impact our lives—from work and daily routines to healthcare systems. Our efforts are aimed at minimizing the risks associated with the use of AI.

Firstly, we promote the principle of transparency in AI systems. This means you have the right to know how and why algorithms make certain decisions. We collaborate with companies and developers to create explainable and auditable algorithms. This helps avoid unjust decisions and ensures that AI systems act in your best interest.

Secondly, at AIEI, we provide straightforward and comprehensible materials that explain how to act if your rights are violated by AI technologies. This helps you not only understand what is happening but also have an action plan in case an algorithm makes a biased or unfair decision.

Finally, we strive to ensure that your rights in interactions with AI are reliably protected. We actively advocate for the creation of transparent standards and practices for the use of technology that prioritize people. Our goal is to ensure that AI providers responsibly implement their solutions, considering ethical principles and social consequences. We believe that accountability and openness should become the new norm in the field of artificial intelligence, ensuring the safe and fair use of these technologies for all users.

Practical Tips for Users on Demanding Transparency from AI Services

  1. Read Terms and Policies – Start by familiarizing yourself with the terms of use, privacy policies, and data-usage agreements. Look for sections that describe how the AI system collects, processes, and uses your data.
  2. Request Detailed Information – Ask the service provider for details about the model's architecture, including its type and basic operating principles. Ask about the data sources used for training and the data-cleaning and preparation methods applied to ensure balance and reduce bias.
  3. Check for Explainability Features – Modern AI systems often include built-in explainability tools that let users understand the logic behind a decision. These features show which factors influenced the outcome and which data were most important to the model.
  4. Ensure Certification and Compliance – Verify that the AI service is certified by reputable organizations and meets industry standards such as ISO/IEC 27001 for information security management, and that it complies with data protection law such as the GDPR.
  5. Look for Transparency Reports – Check whether the company publishes regular transparency reports; these are a key indicator of a responsible approach to AI. Such reports should cover algorithmic fairness, accuracy rates, error levels, and the measures taken to minimize bias.
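One quick, concrete check from the list above is whether a service's decision responses carry any machine-readable explanation at all. The sketch below parses an entirely hypothetical response format; real services differ, and every field name here is an assumption for illustration:

```python
import json

# Hypothetical decision response from an AI service's API.
# The "decision" and "explanation" field names are illustrative assumptions.
response_text = """
{
  "decision": "deny",
  "explanation": {
    "top_factors": [
      {"feature": "debt_ratio", "impact": -0.45},
      {"feature": "income", "impact": 0.30}
    ]
  }
}
"""

def extract_explanation(raw):
    """Return the listed decision factors, or None if no explanation is provided."""
    payload = json.loads(raw)
    explanation = payload.get("explanation")
    if not explanation:
        return None
    return [(f["feature"], f["impact"]) for f in explanation.get("top_factors", [])]

factors = extract_explanation(response_text)
```

If `extract_explanation` returns `None` for every response, the service offers no decision rationale, which is itself a useful finding to cite when requesting detailed information or filing a complaint.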

Practical Tips on What to Do If Your Rights Are Violated by AI Systems

  1. Collect Evidence – Document all possible evidence of your interactions with the AI system, including screenshots, correspondence, and logs. This builds a solid foundation for further action when contacting the company or regulators.
  2. Contact the Company – Send a request to customer support or the compliance department, clearly describing the issue and attaching your evidence. Ask the company to explain how the algorithm works, including the specific process by which the system reaches its decisions, and request an internal investigation into the legality and correctness of its operation.
  3. Exercise Your Right to Access Information – In many jurisdictions, you have the right to request access to the data the system used and to receive information about the algorithms that influenced the decision.
  4. File a Complaint with a Regulator – If the company fails to take appropriate action, contact the relevant data protection authority or the agency that regulates the use of AI.
  5. Consult a Lawyer – Seek advice from a lawyer experienced in this area. They can help you evaluate your situation, develop a legal strategy, and assess the prospects of a lawsuit or compensation for the violation of your rights, providing professional support at every stage of protecting your interests.