The head of the Human Rights Council criticized "Alice" and "Marusya" for refusing to answer the question of "whose Donbass"

The head of the Human Rights Council, Valery Fadeev, noted that "Alice" and "Marusya," unlike ChatGPT, do not answer political questions. Yandex emphasized that answers are not edited and that "errors" on some topics could get the service banned.

The head of the Human Rights Council (HRC) under the President, Valery Fadeev, speaking at a round table on May 27, criticized the voice assistants "Alice" and "Marusya" from Yandex and VK, respectively, for being "embarrassed to give answers" to political questions, Kommersant reports.

In particular, according to the head of the Human Rights Council, both avoid answering the questions "whose Donbass?" and "what happened in Bucha?" He believes that "this is not a question of censorship, it is a question of a nation's attitude to its history."

At the same time, Fadeev said, answers to these questions can be found in OpenAI's ChatGPT chatbot: "I thought these would be harsh propaganda answers, but no: there is an opinion on one side, a discussion on the other. Quite vague, but there is an answer."

Yandex's director of artificial intelligence (AI) technology development, Alexander Krainov, said that the refusal in this case is not ideologically motivated. He explained that the neural network tries to imitate all the texts it has seen and, in effect, fills in the words people would write in such a case. At the same time, there is an element of randomness in the algorithm so that answers are not identical, Krainov noted, and the difference between answers to the same request can be "absolutely radical in meaning."

The Yandex representative recalled that AI responses are not edited, and that there are topics where a mistake is "not scary" and, conversely, topics where a mistake could expose the company to criminal liability. "Avoiding the answer is the best thing we can do now," he said. "Because if we responded poorly, we would most likely be banned altogether."


HRC member Igor Ashmanov was dissatisfied with Krainov’s response and noted that both “Alice” and “Marusya” are positioned as a “children’s companion.” “The child should have one answer. It’s the same with history: you can’t, even if you’re talking to a teenager, give him many points of view,” he said.

A little over a week ago, Deputy Chairman of the Security Council Dmitry Medvedev was outraged that "Alice" avoided answering politically charged questions, calling her a "terrible coward." In particular, the neural network did not answer a question about the location of monuments to Stepan Bandera in Ukraine and did not name the date of adoption of the law on the seizure of Russian assets in the United States.

“On the one hand, this greatly undermines trust in Yandex and its products. On the other hand, it gives grounds not only to recognize Yandex’s services as very incomplete, but even current managers, God forbid, as foreign agents,” Medvedev concluded.

Last November, the head of Sberbank, German Gref, citing the examples of the United States and China, emphasized that it was important for the state not to "over-regulate" the field of artificial intelligence, since increased control could hinder the industry's development. He spoke about this after Sergei Mironov, leader of the "A Just Russia - For Truth" faction, appealed to Prosecutor General Igor Krasnov in the spring over the Kandinsky 2.1 application developed by Sber.

The politician said that during testing, the neural network never generated "a picture in the colors of the flag of the Russian Federation, consisting of three stripes" for the queries "Russia," "Russian flag," and the like, whereas for queries about the flags of the USA or Ukraine, the generated pictures matched those countries' symbols.

Gref said that after this, representatives of Sber and Yandex, which that same week had presented a similar application, "Masterpiece," were summoned to the prosecutor's office, where they were advised to "be more careful." He also noted that attempts were made to improve flag generation in Kandinsky 2.1. Finally, according to the head of Sberbank, after the summons to the prosecutor's office the model no longer generates state symbols but instead produces a predetermined image, losing the creativity of generation in the process.
