
Twitter says crowdsourced fact-checking system updated to better address ‘low quality’ contributions

The algorithm change involves scoring notes where contributors explain why a tweet shouldn’t be deemed misleading. Twitter had earlier paused scoring these types of notes because the rating data was “noisy,” the company said in a series of tweets posted on Friday night. However, it found these notes could still be low quality and “annoying to contributors,” so it’s now resuming scoring them, aided by other recent changes that help it identify the different types of notes. This latest update will better identify and lock out more contributors who aren’t writing helpful content, Twitter said.

To be clear, the company itself is not determining note quality. Twitter VP of Product Keith Coleman clarified in a tweet that “low-quality” notes are rated as such if a “wide range of people,” including those who typically disagree with one another, all agree a particular note is not helpful. “This prevents one-sided outcomes,” he explained.

The update follows a series of advertiser exits from Twitter as new owner Elon Musk promotes community-based moderation as the future of the platform. Given that Twitter makes the majority of its money from ads, it’s unclear how long the company will be able to sustain itself with the reductions in revenue. Musk, too, is clearly concerned; just today he publicly shamed Apple over its decision to pause advertising, tweeting to ask if the company “hate[s] free speech in America.”

Birdwatch, as Twitter’s crowdsourced fact-checking system was previously called, was rebranded to “Community Notes” shortly after Musk took ownership of Twitter, and it is something the new CEO sees as key to the future of Twitter’s moderation. Musk has been highly critical of Twitter’s former content moderation efforts, which he saw as an overreach.
Teams engaged in content moderation were also a sizable part of Twitter’s massive layoffs earlier this month, and were cut again in mid-November when Twitter eliminated a large number of contractor positions. Community Notes takes a different approach to content moderation by putting much of that effort in the hands of Twitter’s user base.

The system is not as simple as having content upvoted or downvoted for accuracy, an approach that could easily be gamed if brigades of like-minded contributors teamed up to promote their own viewpoints. Instead, Community Notes uses a “bridging” algorithm that attempts to find consensus among people who don’t usually share the same views.

To become a contributor, users must first prove they’re capable of writing helpful “notes” by correctly assessing other notes as either Helpful or Not Helpful, which earns them points. Users start with a rating impact score of zero and have to reach at least a 5 to become a contributor, Twitter previously explained. After reaching contributor status, users must continue to add quality contributions or they will have their contributor status removed.

The original idea behind Community Notes was to create a system that would add a layer of fact-checking and context to tweets that don’t necessarily violate Twitter’s rules. But in the Musk era, Community Notes may play an even larger role, as Twitter now employs far fewer moderators following its layoffs.

Despite the system being designed to look for consensus, as more Twitter users flee to other platforms, like Mastodon, CounterSocial, Hive, Post, Tumblr and others, Twitter may lose access to potential contributors willing to do this kind of work. In that case, the “crowd” may not represent the voice of the wider public, much like how Wikipedia is open to editing by all, but most of it is ultimately written by only 1% of editors.
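The two mechanisms described above can be sketched in a few lines of Python. This is a hypothetical simplification, not Twitter’s actual implementation (the production ranking algorithm is considerably more sophisticated): the point penalty for incorrect assessments and the viewpoint-group labels are illustrative assumptions, while the zero starting score and the 5-point threshold come from Twitter’s own description.

```python
CONTRIBUTOR_THRESHOLD = 5  # per Twitter: start at 0, need at least 5 to contribute


def rating_impact(assessments):
    """Each assessment pairs a user's rating with the eventual consensus
    rating on that note. Matching the consensus earns a point; the -1
    penalty for contradicting it is an assumption for illustration."""
    score = 0
    for my_rating, consensus_rating in assessments:
        score += 1 if my_rating == consensus_rating else -1
    return score


def can_contribute(assessments):
    """A user unlocks note-writing once their rating impact reaches the bar."""
    return rating_impact(assessments) >= CONTRIBUTOR_THRESHOLD


def note_is_helpful(ratings, min_groups=2):
    """Toy 'bridging' check: a note counts as helpful only when raters from
    several distinct viewpoint groups rate it helpful, rather than when it
    simply piles up a large raw count from one like-minded bloc."""
    groups_rating_helpful = {group for group, helpful in ratings if helpful}
    return len(groups_rating_helpful) >= min_groups
```

The last function shows why brigading fails under a bridging rule: three “helpful” ratings from the same viewpoint group, e.g. `[("left", True), ("left", True), ("left", True)]`, do not clear the bar, while one rating each from two opposing groups does.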
In addition, if Twitter’s user base overall begins to lean largely to one side, more conservative than liberal, for example, a bridging algorithm could become less useful in representing a true consensus.

Just ahead of the U.S. midterms (and Musk’s acquisition of Twitter, as it turned out), Community Notes, then called Birdwatch, expanded in the U.S., making its notes visible to all U.S. users. The company said at the time it would add around 1,000 more contributors per week, on top of its 15,000 pilot testers. It’s not clear how many people actually write Community Notes now, how often, or when the system will be open for sign-up to all of Twitter’s global users, and Twitter no longer has a comms team to field such questions.

In more recent days, Musk has been touting this community fact-checking system to advertisers who are concerned about the potential for increased misinformation, disinformation and other toxic content on the platform in light of Musk’s “free speech” agenda. In a call with advertisers on Nov. 9, the exec referred to Community Notes as “epic” and a “gamechanger,” and something that would ultimately help improve the accuracy of what’s said on Twitter. Musk himself has been corrected by the community fact-checking system, though he also often just deletes tweets rather than face the repercussions of being wrong.

Many advertisers, however, don’t seem convinced that crowdsourced moderation will make Twitter a safe place to promote their brands. Several big advertisers have already pulled out, including General Mills, Audi and Pfizer, as well as automakers like General Motors. (Though the latter is more concerned about advertising on a site owned by a direct competitor, as Musk is also Tesla’s CEO.)
A report last week by The Washington Post also found that more than a third of Twitter’s top 100 clients had not advertised on the platform in the past two weeks, an indication that brands likely need more assurances of platform safety than something like Community Notes can provide.

Twitter says crowdsourced fact-checking system updated to better address ‘low quality’ contributions by Sarah Perez originally published on TechCrunch
