Alex Stamos, Facebook's chief security officer, lashed out on Twitter against critics who accuse the social network of failing to remove fake news from circulation in a timely manner.
"It is very difficult to identify fake news and propaganda using algorithms alone; anyone who wants that does not understand how systems get abused," Alex Stamos charged on Twitter, according to cyberm.ro.
The security chief's frustration stems from journalists' criticism of Facebook's decision to manually moderate sponsored ads touching on "politics, religion, ethnicity or social issues" before they go live in users' feeds.
Facebook's chief security officer unleashed a series of 18 messages of up to 140 characters on Twitter, attacking journalists' lack of research into how technology, artificial intelligence and automation are used to identify fake news and propaganda, and into the problem of ideologically biased training data.
I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos.
Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks.
In fact, an understanding of the risks of machine learning (ML) drives small-c conservatism in solving some issues.
For example, lots of journalists have celebrated academics who have made wild claims of how easy it is to spot fake news and propaganda.
Without considering the downside of training ML systems to classify something as fake based upon ideologically biased training data.
A bunch of the public research really comes down to the feedback loop of "we believe this viewpoint is being pushed by bots" -> ML
So if you don’t worry about becoming the Ministry of Truth with ML systems trained on your personal biases, then it’s easy!
Likewise all the stories about "The Algorithm". In any situation where millions/billions/tens of Bs of items need to be sorted, need algos
My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.
And to be careful of their own biases when making leaps of judgment between facts.
If your piece ties together bad guys abusing platforms, algorithms and the Manifestbro into one grand theory of SV, then you might be biased
If your piece assumes that a problem hasn’t been addressed because everybody at these companies is a nerd, you are incorrect.
If you call for less speech by the people you dislike but also complain when the people you like are censored, be careful. Really common.
If you call for some type of speech to be controlled, then think long and hard of how those rules/systems can be abused both here and abroad
Likewise if your call for data to be protected from governments is based upon who the person being protected is.
A lot of people aren’t thinking hard about the world they are asking SV to build. When the gods wish to punish us they answer our prayers.
Anyway, just a Saturday morning thought on how we can better discuss this. Off to Home Depot. FIN