Early Content Policy and Guideline documents contained a single sentence stating that those who regularly hosted and uploaded sexual videos would be suspended. The 2012 update to the Community Guidelines elaborated by setting two rules: users who “regularly post sexual or adult-oriented content” were asked to flag their blogs as “Not Suitable For Work (‘NSFW’),” and users were welcome to embed links to sexually explicit video but asked to avoid uploading it, because tumblr was “not in the business of profiting from adult-oriented videos and hosting this stuff is fucking expensive.” The call to self-label ushered in the first version of the so-called Safe Mode, which filtered the content of blogs self-tagged as NSFW out of the dashboards and search results of those users who selected that option. In 2012, Karp went on record saying he was not “into moderating” NSFW content and that tumblr was “an excellent platform for porn,” which he did not “personally have any moral opposition to” (Cheshire 2012). After the sale to Yahoo! in 2013, tumblr started tinkering with the visibility of sexual content in what Gillespie (2018: 173) has described as an attempt on Yahoo!’s part both to let “tumblr be tumblr” and to sell ads. When invited to comment on the matter by talk show host Stephen Colbert, Karp maintained that tumblr had taken a pretty hard line on freedom of speech, arguing that he did not want to “go in there to draw the line between” art and behind-the-scenes photos of “Lady Gaga and like, her nip” (Dickey 2013). The Community Guideline clauses regarding NSFW content remained the same throughout updates in 2015, 2016, and 2017, although a link to “report unflagged NSFW content” was added in the 2016 update (tumblr 2016). In 2017, a stricter Safe Mode was introduced.
The new system was quite complex, filtering blogs that were self-, moderator-, or automatically labeled as NSFW from the external and internal search results of all non-logged-on users and all logged-on users who were under the age of 18 (see Chapter 6).

      Although many users hoped that the NSFW ruling would be reversed under Automattic, CEO Mullenweg dashed that hope by citing the app stores’ intolerance of NSFW content as the reason for the ban (Patel 2019). Sexually explicit content is still present on the platform, though its makeup and volume have changed. Based on our experiences, original visual content created by tumblr users themselves, often of themselves (see Chapter 6), is nearly gone. What remains is pornographic content: GIFs, videos, and still images from porn, which are much more explicit than selfies with female-presenting nipples ever were.

      Algorithms

      An early case of algorithmic imaginary (Bucher 2017) emerged after tumblr’s 2012 policy against self-harm blogs, when we noticed vernacular techniques for keeping backup hashtags and otherwise circumventing the algorithms circulating among some thinspo blogs (Kanai et al. 2020; see also Chapter 7). Users’ imaginaries of tumblr algorithms shifted more drastically with the 2017 Safe Mode, when algorithms were obviously and intrusively employed to filter content (see Chapter 6). Certain keywords that returned results via browser returned nothing on the mobile apps because of app store restrictions. These included “#gay”: based on the data it was trained on, the filtering algorithm had determined that the hashtag often accompanied pornographic content, but the LGBTIQA+ community rightfully interpreted this as an outright attack. tumblr managed to placate users by reversing some of the changes, by promising to work on more intelligent solutions for battling porn bots and filtering content, and primarily by demonstrating that they were listening. Their resolution of this particular governance conflict showed that they understood that moderation involves a politics of visibility (Gillespie 2018), which in the case of sexual self-expression often follows the fault lines of systematic marginalization (e.g., disenfranchising the LGBTIQA+ community).
