Finally, it is important to understand how the “reply” function differs from a typical (public) commenting feature. David Karp expressed the view that comments bring out the worst in people (Walker 2012), so he purposefully omitted that functionality, presenting reblogs, fan mail, or replies as its substitutes in various interviews. Replies were introduced as a beta feature in 2010 and were generally loved by users. They were discontinued in 2015 to roll out messaging, but brought back in 2016. Bloggers can choose whether they want to enable replies. In the early years, a blogger had to turn replies on from the settings menu. Now, a user can select who is able to reply to their posts (everyone, people the user follows, or people who have followed the user for at least a week). Replies are only available to logged-in users via the dashboard. tumblr’s Help page describes replies as a “way of responding to a post that’s more specific than a like, less of a commitment than a reblog, and more public than a message” (tumblr Help Center 2020b).
Governance
While the features and functions of tumblr tell us about the platform at the level of the interface, tumblr’s governance tells us how the platform is structured more broadly, and how and what Tumblr Inc. thinks users should be able to do. Platforms are governed by laws, regulations, general industry logics, and their owners’ visions, but they also govern us, their users, by setting explicit rules, making design and functionality choices (e.g., defaults and mandatory fields in profiles), and policing our behavior for compliance (see van Dijck 2013; Gillespie 2018; Light et al. 2018). The central piece of legislation governing American-owned social media platforms, including tumblr, has been Section 230 of the US Communications Decency Act (CDA 230). It states that internet intermediaries – including social media platforms – are not liable for their users’ harmful speech, yet are allowed to regulate it as they see fit, without losing this “safe harbor” from liability.3 While CDA 230 has consistently been heralded as the cornerstone of internet innovation and free speech, it has also been critiqued for letting social media platforms off the hook on the false premise that they are not shaping, amplifying, and suppressing content for profit (Marwick 2017). Here, we discuss how participants, practices, and content are moderated on tumblr. We follow this with a brief discussion of how algorithms are used in content moderation.
Moderating participants and practices
Between 2007 and 2012, tumblr’s Terms of Service were, as is common for social media platforms, written in impenetrable legalese. Back then, a “subscriber” was described as someone at least 18 years of age. In 2012, tumblr added humorous, accessible “translations” to sections of its Terms of Service agreement and its Community Guidelines. The age of eligibility was lowered to 13, and the following explanation was added: “You have to be at least 13 years old to use Tumblr. We’re serious: it’s a hard rule, based on U.S. federal and state legislation, even if you’re 12.9 years old. If you’re younger than 13, don’t use Tumblr. Ask your parents for an Xbox or try books.” By 2020, tumblr’s rules regarding age depended on the user’s location, echoing differences in law. tumblr users thus now have to be at least 13, or at least 16 if they live in the EU (with some flexibility depending on the data-processing consent age limits in particular European countries), and at least 18 to access blogs self- or platform-flagged as “explicit”4 (tumblr Help Center 2020c).
tumblr has never made any prescriptions about usernames beyond stating, since the 2012 update to the Community Guidelines, that “Tumblr’s URLs (usernames) are for the use and enjoyment of our users” and should not be hoarded, traded, or sold, nor registered for the purpose of impersonating someone. The 2012 accessible translation added that, “if you want to parody or ridicule a public figure (and who doesn’t?), don’t try to trick readers into thinking you are actually that public figure” (Community Guidelines update 2012). Setting up an account has only ever required a functioning email address and an age; tumblr has always accepted pseudonymity. The 2012 Community Guidelines also introduced “non-genuine social gesture schemes” (artificially enhancing one’s follower count), “mass registration and automation,” “unauthorized sweepstakes or giveaways,” as well as fraud and phishing into the list of “What tumblr is not for.”
Moderating content
Most social media platforms prohibit or limit representations of sex, pornography, violence, obscenity, self-harm, and illegal activities, as well as content that functions as hate speech and harassment (Gillespie 2018). Of course, how stringently different platforms police adherence to this list varies quite a bit. In 2012, tumblr staff posted plans for revising their Content Policy “against self-harm blogs” and proposed the removal of “active promotion of self-harm,” including content that glorified or recommended self-injury, suicide, or eating disorder techniques. Users were invited to provide feedback on the policy change, and the response was intense, immediate, and conflicted. In less than a week, the post received more than 25,000 notes (staff 2012a). Responses characterized the move as, variously, stupid and dangerous; unfair (“what about other blogs that promote alcohol and drugs?,” “It’s still okay to have a racist blog on Tumblr”); exclusionary (“will target primarily women”); unproductive and potentially harmful (“some things need to be talked about”); well-intentioned but misguided (“taking away another safe space”); urgently needed and smart; and impractical (“where does Tumblr plan to draw the line between what is acceptable and what is not?”) (staff 2012a). In their follow-up post, Tumblr Inc. appeared to have consulted with the National Eating Disorder Association and taken some of the user feedback on board, promising to find a balance between removing content and keeping tumblr a place “where people struggling with these behaviors can find solace, community, dialog, understanding, and hope” (staff 2012b). Unlike Instagram, and perhaps as part of this promise, tumblr did not remove particular tags but started, instead, showing a PSA (“public service announcement,” i.e., information on resources and support organizations) asking “Everything okay?” on search results for particular keywords and hashtags (see Figure 1.4).5 While the impulse is admirable, the reality of the situation is more complex. Users have dynamic and ever-developing techniques for circumventing hashtag moderation, and platforms’ automated recommendation systems still circulate self-harm content (Gerrard 2018). Clicking through the PSA and behaving on the platform like a user interested in self-harm will still result in tumblr suggesting self-harm blogs to follow.
Figure 1.4: The public service announcement (PSA) returned when one searches for “proana” on tumblr. Screengrab by authors.
Scholars focused on sexual social media have remarked that American-owned platforms seem to presume that, among the offenses we started this section with, sexually explicit content will deter advertisers the most (see Paasonen et al. 2019; Tiidenberg and van der Nagel 2020).6 It is perhaps unsurprising that, just as David Karp’s attitudes toward advertising differed from those of many of tumblr’s competitors (see Chapter