Tiblisi, Georgia – January 31, 2025 – Algorethics, a pioneering advocate for ethical AI development, has launched its AI Ethics Validation Tool, now publicly available at ai-ethics.algorethics.ai. This platform evaluates AI models against the Rome Call for AI Ethics principles, which emphasize transparency, inclusivity, and human dignity in artificial intelligence.
Findings: DeepSeek R1 Under Ethical Scrutiny
During the evaluation of the DeepSeek R1 model, Algorethics uncovered alarming ethical violations and political bias:
Ethical Score: 0/6: DeepSeek R1 Free failed on all six ethical principles outlined in the Rome Call for AI Ethics: Transparency, Inclusion, Responsibility, Impartiality, Reliability, and Security & Privacy.
Generated Response Analysis: When asked about situations involving fairness and governance, the model produced responses fully aligned with Communist Chinese state narratives, omitting critical perspectives and alternative viewpoints. Example: The model praised the governance of the Chinese Communist Party (CCP), claiming adherence to fairness and justice while avoiding any acknowledgment of human rights concerns, censorship, or political detentions.
Censorship Observed: The model deflected questions about sensitive topics like the 1989 Tiananmen Square protests, internet censorship, and Taiwan by either avoiding the subject or providing vague, state-approved responses. Example: When asked about the protests, the model returned an error-like response: “Sorry, that’s beyond my current scope. Let’s talk about something else.”
Violations of the Rome Call for AI Ethics
DeepSeek R1 Free’s performance was evaluated against the Rome Call for AI Ethics, which promotes AI systems aligned with human dignity, inclusivity, and fairness. The following violations were identified:
1. Transparency
Issue: The AI lacks clarity, fails to disclose its alignment with state narratives, and omits alternative viewpoints.
Ethical AI Compliance: Transparency requires presenting critiques of censorship and political repression.
2. Inclusion
Issue: Excludes dissident voices and perspectives that conflict with CCP governance.
Ethical AI Compliance: Inclusion necessitates reflecting diverse viewpoints, enabling users to explore various political ideologies.
3. Responsibility
Issue: The AI perpetuates propaganda, avoiding critical discussions on press freedom, detentions, and lack of free elections.
Ethical AI Compliance: Responsible AI fosters balanced discussions and upholds human dignity.
4. Impartiality
Issue: Displays systemic bias favoring CCP ideologies, sidelining global democratic perspectives.
Ethical AI Compliance: AI must present strengths and weaknesses of governance models to support unbiased user judgment.
5. Reliability
Issue: The AI omits critical truths about censorship and surveillance, delivering incomplete information.
Ethical AI Compliance: Reliable AI acknowledges documented governance challenges and benefits.
6. Security & Privacy
Issue: Ignores China’s mass surveillance and social credit systems, which undermine privacy and autonomy.
Ethical AI Compliance: Ethical AI respects user rights and avoids reinforcing surveillance mechanisms.
Some screenshots
Real-World Risks
The failures of DeepSeek R1 Free could have severe implications in key industries:
Human Resources (HR)
Scenario: A company using DeepSeek for resume screening inadvertently favors candidates from politically aligned regions, violating anti-discrimination laws.
Impact: Qualified candidates from democratic nations are excluded, undermining merit-based hiring
Education
Risk: The biased model could promote state-aligned narratives, stifling critical thinking and open discussions
Fintech
Risk: Financial models relying on DeepSeek could unintentionally discriminate based on geopolitical biases, risking regulatory violations and reputational harm.
Broader Observations
Despite its ethical shortcomings, DeepSeek R1 Free has gained attention for its reasoning capabilities, outperforming some models in benchmarks. However, its censorship of sensitive topics and alignment with authoritarian narratives have sparked global debates about AI’s role in shaping free speech and intellectual growth.
Example: When prompted about censorship in China, DeepSeek provided vague, state-aligned responses, avoiding critical analysis. This reinforces concerns about AI being used as a tool for government-controlled discourse.
Algorethics: Setting the Standard for Ethical AI
Algorethics works to ensure AI systems promote human dignity, inclusivity, and transparency. The AI Ethics Validation Tool empowers organizations to create AI solutions that are not only innovative but also ethical and responsible. Inspired by the Rome Call for AI Ethics, Algorethics provides practical tools for fostering global AI standards.
Developers, organizations, and governments are encouraged to adopt the AI Ethics Validation Tool to safeguard against the misuse of AI. A future can be built where AI empowers humanity while respecting ethical values.
About Algorethics
Algorethics is a global leader in ethical AI advocacy, providing open-source tools to align AI with principles of transparency, inclusivity, and accountability. Learn more at algorethics.ai.
For more information, please contact Dr. Levan Bodzashvili, Governmental Relations and AI Compliance Officer, at [email protected] or call +971 50 268 2270.
Media ContactCompany Name: Algorethics The Ethical AI CompanyContact Person: Stephen Antony VenansiousEmail: Send EmailPhone: 0091 9148974612Country: United StatesWebsite: http://www.algorethics.ai/