X fixes bug that mistakenly flagged posts as “sensitive media”


A bug on X, formerly Twitter, was causing numerous posts over the weekend to be flagged as “Sensitive Media,” thwarting the company’s own attempts to make its platform more approachable to advertisers. According to a post from the X Safety account, the issue has now been fixed and the team is working to remove the labels from impacted posts.

“Sensitive media” is a label X uses to denote content that others may not wish to see, like violence or nudity. X asks users who want to regularly post such items to adjust their media settings so their images are appropriately marked. In addition, there’s an option to add a one-time sensitive content warning to photos and videos across X on iOS, Android, and the web. Posts can be marked as containing nudity, violence, or just “sensitive” content, which restricts viewing behind a blurred content warning that requires an additional click or tap to see the media.

However, X users were finding that even innocuous photos and media were being marked as “sensitive” — something the company itself can do if it reviews reported items and chooses to add the label to protect users. In the days before Twitter’s acquisition by Elon Musk, a combination of automation and human review (members of the company’s trust and safety team) would act on these reports to make the decision.

In this most recent event, the issue may have been the work of a spam bot, according to a post by Musk, which somewhat contradicts the X Safety announcement.

On Sunday, Musk wrote, “An X spam/scam bot accidentally flagged many legitimate accounts today. This is being fixed.” An hour later, he reposted the message from the X Safety team, which referred to the issue as a bug.

The issue is the latest misstep at X as it attempts to figure out a new monetization strategy after advertisers fled the service under Musk. Speaking at an event in November, Musk told advertisers to “go f*ck yourself” when questioned about big brands’ decision to pause advertising on X over concerns about antisemitic content on the platform. The company was later said to be pursuing small-to-medium-sized advertisers in the meantime as it moves forward with its strategy to roll out AI and peer-to-peer payments in 2024.

The issue could also have been worsened by X’s workforce reductions, which impacted its trust and safety team — the team that typically reviewed accounts for spam and sensitive content.

Bots flagging accounts is not the only area where X has faced increased spam in recent months. A search on X for the phrase “I’m sorry, I cannot provide a response as it goes against OpenAI’s use case policy” recently revealed many automated accounts masquerading as regular users and, in fact, many of them were paying X Premium subscribers. Musk had believed that charging a small fee would help rid the platform of spam, but this indicated that at least some bots were willing to pay to look human. The company also admitted last summer that it had a Verified spammer problem when it announced new DM settings that would move messages from Verified users out of your inbox — another indication that X’s verification system was not weeding out spammers, as hoped.




