Content moderation is difficult work for any social media company. Every day, millions of posts and messages are shared on these platforms. Most are benign, but there will always be abusive, hateful, and sometimes violent content shared or promoted by certain individuals and organizations. Most social media companies expect their users to engage on these platforms within a set of rules or community standards. These content policies are often decided upon with careful, studied reflection on the gravity of moderation, in order to provide a safe and appropriate place for users. It remains a thorny ethical issue, though, because social media has become such a massive and integral part of our diverse society, to say nothing of the hyper-politicization of such questions.
Facebook announced last Monday that it would be sending a pop-up notification to iOS users, asking for permission to track their activity so that they can be served targeted ads. The pop-up tells users that allowing trackers will mean they “get ads that are more personalized” and “support businesses that rely on ads to reach customers.”
Internet disruptions in Myanmar last Monday morning coincided with reports that top politicians, including the country’s de facto leader Aung San Suu Kyi, were being rounded up by the military. That’s no surprise: internet blackouts are now common around the world when power hangs in the balance.
“Fake Famous” offers a novel window into the world of influencers, conducting an experiment to see if three young wannabes can be transformed into marketing dynamos. While their tales don’t unfold entirely as planned, the HBO documentary exposes how ripe for manipulation this whole culture is, and the powerful incentives to game the system.
TikTok will start displaying warnings on videos containing questionable information that couldn’t be verified by fact-checkers, and it will warn users who attempt to re-share those videos that the information hasn’t been confirmed. The app will now display a warning label on these videos that reads, “Caution: Video flagged for unverified content.”
Suffering attacks from Democrats and Republicans alike, Facebook has sought to distance itself from the issue by deferring some content-moderation decisions to an independent oversight board and embracing government involvement in content moderation. Now, Twitter is taking this uncoupling of ownership and control one step further, piloting a system called Birdwatch in which selected people serve as the “community’s voice” to “identify information in tweets they believe is misleading and write notes that provide informative context.”