Levent Kenez/Stockholm
A controversial artificial intelligence (AI) initiative put forward by Turkey’s Ministry of Justice, known as the CBS Organizational Prediction Project, has raised alarm among legal experts and human rights advocates. The system, designed to automatically identify potential associations between new case entries and previously classified terrorist organizations in the national judiciary database, is being criticized for its potential to violate the presumption of innocence and international human rights standards.
The AI tool is integrated into Turkey’s National Judiciary Informatics System (UYAP) with the stated aim of improving the accuracy of judicial statistics and reducing human error in data entry. The Ministry of Justice described the system in an official presentation on April 8 to a parliamentary committee convened to assess the risks of AI use in the public sector and to explore the need for a legal framework. According to that presentation, the model cross-references newly entered case data against an existing database of recognized terrorist organizations, automatically flagging inconsistencies and suggesting classifications.
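The ministry has not published technical details of the matching logic. Purely as a rough illustration of what automated flagging of this kind can look like, the following Python sketch compares a new entry against a list of previously classified organization names; every name, threshold and function here is hypothetical and is not drawn from UYAP or the ministry’s presentation.

```python
# Hypothetical illustration only -- not the ministry's actual system.
# Flags new case entries whose text loosely resembles names on a
# previously classified organization list, producing suggestions
# that a human operator would then review.
from difflib import SequenceMatcher

CLASSIFIED_ORGANIZATIONS = ["Organization A", "Organization B"]  # placeholder names

def flag_entry(entry_text: str, threshold: float = 0.8) -> list[str]:
    """Return organization names whose similarity to the entry exceeds
    the threshold. A flag is only a suggestion, not a determination."""
    flags = []
    for org in CLASSIFIED_ORGANIZATIONS:
        score = SequenceMatcher(None, entry_text.lower(), org.lower()).ratio()
        if score >= threshold:
            flags.append(org)
    return flags

print(flag_entry("organization a"))  # ['Organization A']
```

Even a toy comparison like this makes the critics’ concern concrete: what attaches the label is a similarity score, not a judicial finding.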

Such automation poses serious risks, particularly regarding the presumption of innocence. Automatically labeling individuals or legal documents as affiliated with terrorist organizations based on algorithmic inference may amount to pre-judgment. This approach risks introducing systemic bias into legal proceedings and could be seen as incompatible with fundamental legal principles, including the right to a fair trial.
The CBS Organizational Prediction tool raises profound legal and ethical questions. By algorithmically linking individuals or cases to terrorist organizations, the system may effectively “tag” persons before any judicial examination has occurred. Such tagging could create irreversible reputational harm and bias judges from the outset.

The ministry defends the project by emphasizing its administrative benefits, including enhanced statistical accuracy and better reporting compliance with international organizations such as the Financial Action Task Force (FATF).
The CBS project is not an isolated example. The Ministry of Justice’s broader AI integration strategy includes at least eight other projects that may infringe on international legal norms and human rights.
The Justice Ministry acknowledges that AI systems are only as neutral as the data used to train them. Despite this, the large-scale deployment of AI in decision-support systems without full transparency on data sources raises concerns. Bias in datasets can result in discriminatory outcomes, particularly against minority groups. Without a robust mechanism to identify and mitigate biases, AI decisions risk reinforcing existing prejudices.
While the ministry claims that AI is used solely for decision support and not for making final legal judgments, the influence of such tools on human decision-makers is not trivial. If judges rely on AI outputs without understanding the logic or data behind them, the legal principle of reasoned judgment is compromised. The absence of clear explainability mechanisms also runs counter to international calls for transparent AI.
The AI tools operate over a massive database of judicial documents, many of which contain highly sensitive personal data. Although the ministry claims that data is anonymized and protected, the risk of re-identification in large datasets remains. Any data breach or misuse could contravene Turkey’s domestic data protection law (KVKK) and international standards like the General Data Protection Regulation (GDPR).
The deployment of AI in judicial settings raises the unresolved question of legal liability. If an AI system contributes to a wrongful judicial decision, it remains unclear whether the responsibility lies with the developer, the state or the individual judge. This legal ambiguity could leave victims without effective legal remedies, violating the right to an effective remedy under Article 13 of the European Convention on Human Rights (ECHR).
Projects like the “Ez Cümle” initiative, which generates automated summaries of legal texts, and others that propose standardized reasoning in verdicts could lead to over-mechanization of the judiciary. While such tools improve efficiency, they risk diminishing the depth and nuance of human judicial reasoning. Systems of this kind might unintentionally reduce the judiciary to a procedural formality, running counter to the principles of human dignity and individualized justice.
Some AI projects aim to forecast crime trends or predict organizational affiliations based on historical patterns. This mirrors predictive policing systems criticized in other countries for violating privacy and enabling racial profiling. Without strict legal safeguards, such practices may infringe on rights protected under Article 8 of the ECHR.
If AI significantly influences initial rulings or classifications without clear documentation of its logic, it could undermine the effectiveness of appeals. Defendants would face difficulties challenging decisions when the rationale is embedded in opaque algorithms, potentially violating fair trial guarantees.
The ministry emphasizes the existence of an internal ethics committee overseeing AI implementation, but this body operates within the same administrative structure that develops the tools. Independent oversight — an essential component of democratic accountability — is missing, limiting transparency and increasing the risk of abuse.
During the parliamentary committee meeting, Servet Gül, director general of information technologies at the Ministry of Justice, acknowledged the ministry’s limited technical capacity in AI development. He said the ministry’s Artificial Intelligence and Big Data Department currently consists of only 11 technical staff members. Although additional managerial personnel are involved, he admitted that this number is insufficient. “Is it enough? No,” Gül said, noting that while efforts are underway to expand the team by recruiting qualified individuals, the current resources remain constrained. He also emphasized that AI can produce beneficial results in the hands of well-intentioned people but warned of its potential misuse when operated by those with malicious intent.
Dr. Osman Gazi Güçlütürk, a faculty member at Galatasaray University, also addressed the committee on artificial intelligence. He began by displaying the official definition of AI published in the Official Gazette in July 2024, noting that the definition itself had been one of the most significant changes of that period. “I could ask 20 different engineers to review it, and they would all tell me that the definition does not encompass any of the systems,” Güçlütürk said. He warned that an AI regulation focused on a single type of system, or drafted too rigidly, could lead to undesirable outcomes.
A significant risk in Turkey is the security of data held by public institutions. Information from institutions that frequently experience database breaches is often sold on dark web marketplaces. On September 12, 2024, Minister of Transport and Infrastructure Abdulkadir Uraloğlu confirmed that the personal data of 85 million people had been stolen during the pandemic. The minister acknowledged a leak from the healthcare system; however, after an intense backlash, he claimed there had been a misunderstanding and that no data leak had occurred. In 2015 the Supreme Election Council was hacked and voter information was stolen. In 2021 the systems of the Ministry of Agriculture and Forestry were also breached, resulting in the loss of all digital data; according to opposition claims, the system was restored only after a ransom was paid to the hackers.
Turkey was ranked 117th out of 142 in the World Justice Project’s 2024 Rule of Law Index, dropping one rank from the previous year. The index measures rule of law based on factors such as corruption, fundamental rights and civil and criminal justice.
Minutes of the parliamentary committee meeting on April 8 in Ankara: