May 1, 2024

Athens News

News in English from Greece

Study: Internet freedom declines for 13th year in a row


Freedom House’s recently published annual report, “Freedom on the Net,” records a decline in the level of freedom on the Internet, and all “thanks” to artificial intelligence.

The Russian service of the Voice of America reports that the report’s authors highlight the development of AI technologies that help governments carry out Internet censorship more precisely and more subtly, enabling them to:

  • quickly spot signs of dissent on social networks;
  • identify and punish “dissidents”;
  • spread disinformation effectively.

The report’s authors studied 70 countries that together account for 88% of the world’s Internet users, about 4.9 billion people. For the ninth year in a row, the worst conditions for Internet freedom were recorded in China, with Myanmar in second place. Among the post-Soviet countries, the positions in the Internet freedom ranking are as follows:

  • Georgia – 11th;
  • Armenia – 16th;
  • Ukraine – 31st;
  • Kyrgyzstan – 36th;
  • Azerbaijan – 51st;
  • Kazakhstan – 53rd;
  • Belarus – 62nd;
  • Uzbekistan – 64th;
  • Russia – 66th.

In 55 of the 70 countries studied, people faced legal consequences for expressing their opinions online. In 41 countries, people were physically attacked or even killed for what they said online.

The report cites Iran and Myanmar as examples, where authoritarian regimes “carried out death sentences against individuals convicted of expressing opinions online.” Also mentioned in this regard are Belarus and Nicaragua, where people received harsh prison sentences for speaking out online: “a core tactic of dictators Alexander Lukashenko and Daniel Ortega in their violent campaigns to maintain power.”

The report’s authors also note the growing use of AI to strengthen censorship in the most technologically advanced authoritarian states: at least 22 countries have legal frameworks that require or incentivize the use of artificial intelligence to remove disfavored political, social, and religious speech.

AI technologies are also widely used as a tool of disinformation. The report notes that AI tools capable of generating text, audio, and images are constantly improving, becoming more accessible and easier to use. Over the past year, these technologies were used in at least 16 countries to sow public doubt, smear opponents, or influence public debate.

The report mentions Operation Double (better known as “Doppelgänger”), in which Russian state or state-linked actors imitated German, Italian, American, British, and French media outlets to spread false claims and conspiracy theories about European sanctions and Ukrainian refugees.

Another example is Venezuelan state media’s use of social networks to produce and distribute videos in which AI-generated news anchors from a nonexistent international English-language channel promote pro-government messages.

AI-manipulated content has also been used to smear opponents in US elections. Accounts linked to the campaigns of former President Donald Trump and Florida Gov. Ron DeSantis posted videos with AI-generated content to undermine each other’s image. Similarly, in February 2023, an AI-manipulated video of Joe Biden making transphobic comments spread quickly on social media; it is believed to have been created to discredit Biden among voters who support transgender rights.

The Freedom House report notes that even when AI-generated information is obviously unreliable and quickly debunked, its mere existence can poison the information space: it undermines public trust in democratic processes, pushes activists and journalists toward self-censorship, and drowns out credible sources of information.

The report’s authors also note that companies such as OpenAI and Google have built safeguards into their chatbots to limit overtly malicious use. Attackers, however, have managed to circumvent these defenses and generate false, discriminatory, or offensive text. According to the researchers, the danger of AI-driven disinformation campaigns will only grow as attackers develop new ways to bypass security mechanisms.

In an interview with the Voice of America’s Russian service, Grant Baker, a technology and democracy research analyst at Freedom House and one of the report’s authors, noted:

“There has been a lot of hope lately that tech companies will be able to regulate content themselves. We advocate for the need for a regulatory framework that governs the creation and use of AI, protects human rights online, and increases transparency in the development and use of such systems.”

The analyst believes that such a legal framework would give regulators and the judiciary the ability to exercise independent oversight of AI development and thereby reduce the potential harm from its use.



