such as Google, Amazon and, beginning in the mid-2000s, YouTube, Facebook and Twitter, lie at the core of this historical transition toward the datafication and algorithmic ordering of economy and society (Mayer-Schoenberger and Cukier 2013; van Dijck 2013; Zuboff 2019).

      Furthermore, while in the Digital Era algorithms were commercially used mainly for analytical purposes, in the Platform Era they also became ‘operational’ devices (A. Mackenzie 2018). Logistic regressions such as those run in SPSS by statisticians in the 1980s could now be operationally embedded in a platform infrastructure and fed with thousands of data ‘features’ in order to autonomously filter the content presented to individual users based on adaptable, high-dimensional models (Rieder 2020). The computational implications of this shift have been described by Adrian Mackenzie as follows:

      if conventional statistical regression models typically worked with 10 different variables […] and perhaps sample sizes of thousands, data mining and predictive analytics today typically work with hundreds and in some cases tens of thousands of variables and sample sizes of millions or billions. The difference between classical statistics, which often sought to explain associations between variables, and machine learning, which seeks to explore high-dimensional patterns, arises because vector spaces juxtapose almost any number of features. (Mackenzie 2015: 434)
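
      The ‘operational’ use of logistic regression described above can be illustrated with a toy sketch. The model, feature values, weights and item names below are all hypothetical, invented for illustration only: a weight vector learned offline is applied online, at request time, to score and rank candidate items for a user. In production such models work with thousands of features rather than the three shown here, but the mechanism is the same.

```python
import math

def sigmoid(z):
    """Map a raw score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def score(weights, features):
    """Linear score: dot product of model weights and an item's features."""
    return sum(w * x for w, x in zip(weights, features))

def rank_items(weights, items):
    """Return (item_id, predicted relevance) pairs, highest first.

    This is the 'operational' step: the model does not explain the data,
    it filters and orders content for a single user in real time.
    """
    scored = [(item_id, sigmoid(score(weights, feats)))
              for item_id, feats in items]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical weights, learned offline from past user behaviour.
weights = [0.8, -0.3, 1.2]

# Hypothetical candidate items, each with a small feature vector
# (e.g. topical match, recency penalty, past engagement).
items = [
    ("video_a", [1.0, 0.0, 0.5]),
    ("video_b", [0.2, 1.0, 0.1]),
    ("video_c", [0.9, 0.1, 0.9]),
]

ranking = rank_items(weights, items)
```

      The contrast Mackenzie draws is then one of scale and purpose: the same model family that a statistician would fit to a few variables to explain an association is here embedded in an infrastructure, fed far wider feature vectors, and used solely to decide what each user sees next.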
