NAMFREL Position Paper on the Use of Artificial Intelligence in Elections and Election-Related Activities

Artificial intelligence (AI) is here to stay and is finding applications in many areas, including elections. While AI's potential to improve electoral processes is recognized, recent elections worldwide have instead highlighted the misuse and abuse of the technology.

In Indonesia, the use of AI and deepfakes rose to prominence ahead of the presidential elections held in February 2024.[1] AI-generated deepfake videos made inroads in the campaign ahead of the March 31, 2024 elections in Turkey.[2] In South Korea, 129 election-related deepfakes were uncovered ahead of the National Assembly elections held in April 2024.[3]

The abuse of AI in elections has prompted the Commission on Elections (COMELEC) to call on legislators to pass a law banning the use of AI and deepfakes, with the 2025 midterm elections less than a year away.

The National Citizens’ Movement for Free Elections (NAMFREL) has expressed opposition to a ban on the use of AI in elections for the following reasons:

  1. It may curtail technological innovation and inadvertently limit the benefits of AI in enhancing electoral processes;
  2. AI technologies are rapidly evolving which may make regulations ineffective;
  3. Banning or regulating the use of AI may infringe on freedom of speech and expression;
  4. COMELEC may face challenges in enforcing a law banning or regulating AI, as implementing and enforcing such a law would require new expertise and skills.

NAMFREL instead recommends that COMELEC draft a Code of Conduct embodying a set of ethical principles that all election stakeholders will be asked to adhere to. These principles were discussed in a roundtable held on June 26, 2024 at the University of Asia and the Pacific (UA&P), participated in by representatives from election monitoring organizations, the information technology industry, AI subject matter experts, the academe, and the Commission on Elections.

Principle 1: Transparency
The use of AI in the generation of election-related content, including political advertisements, must be disclosed, and such content must be appropriately marked. The disclosure must include funding sources, expenditures, the AI technology used, data about the target audience, and the source of such data. Transparency should extend across the AI ecosystem, from content creation to audience targeting, with social media platforms actively participating and adhering to a Code of Conduct.

Principle 2: Respect for Human Rights
AI-generated content must not infringe on the suffrage, digital, and privacy rights of individuals. While harmful AI use may be penalized, a balance with free speech is essential, supported by mechanisms to address AI grievances promptly and inform people about potential rights violations by AI-generated content.

Principle 3: Accountability
Candidates and political parties should register their intention to use AI in campaigns and be open to auditing of their AI-generated content. Legal liabilities and penalties should apply to candidates, political parties, campaign teams, and public relations and advertising firms that commission or produce AI-generated content. Shared accountability is crucial, given the challenges election management bodies face in detection and monitoring on their own.

Principle 4: Truthfulness
AI-generated content must uphold data integrity, with social media platforms actively moderating election-related content. Ensuring truthfulness involves candidates, political parties, and media, supported by a clear source of truth and mechanisms for fact-checking information.

Principle 5: Fairness and Non-discrimination
AI-generated content must be subject to review to detect discrimination based on race, gender, age, socio-economic status, religion, or other protected characteristics, with safeguards in place to prevent such biases. AI-generated content that exhibits discrimination must not be published.

Principle 6: Oversight by COMELEC
The COMELEC, in the exercise of its oversight function, may establish a committee or task force to monitor the use of AI in the generation of election-related content, including AI-generated political advertising, with a focus on detecting misinformation, disinformation, and deepfakes. COMELEC should implement a reporting and complaint process and regulate AI-generated election paraphernalia (AI-GEP). It can also encourage candidates, political parties, and other stakeholders to adopt self-regulation mechanisms for AI use in elections and election-related activities. ###

Download the full position paper: NAMFREL Position Paper on the Use of Artificial Intelligence in Elections and Election-Related Activities


