recommendations, micro-targeted ads, search results, risk predictions, etc. – controls and influences citizens, workers and consumers. Many critical scholars have argued that the widespread delegation of human choices to opaque algorithms results in a limitation of human freedom and agency (e.g. Pasquale 2015; Mackenzie 2006; Ananny 2016; Beer 2013a, 2017; Ziewitz 2016; Just and Latzer 2017). Building on the work of Lash (2007) and Thrift (2005), the sociologist David Beer (2009) suggested that online algorithms not only mediate but also ‘constitute’ reality, becoming a sort of ‘technological unconscious’, an invisible force orienting Internet users’ everyday lives. Other contributions have similarly portrayed algorithms as powerful ‘engines of order’ (Rieder 2020); Taina Bucher, for instance, has examined how Facebook ‘programmes’ social life (2012a, 2018). Scholars have examined the effects of algorithmic ‘governance’ (Ziewitz 2016) in a number of research contexts by investigating computational forms of racial discrimination (Noble 2018; Benjamin 2019), policy algorithms and predictive risk models (Eubanks 2018; Christin 2020), as well as ‘filter bubbles’ on social media (Pariser 2011; see also Bruns 2019). The political, ethical and legal implications of algorithmic power have been discussed from multiple disciplinary angles, and with varying degrees of techno-pessimism (see for instance Beer 2017; Floridi et al. 2018; Ananny 2016; Crawford et al. 2019; Campolo and Crawford 2020).

      The notion of ‘feedback loop’ is widely used in biology, engineering and, increasingly, in popular culture: if the outputs of a technical system are routed back as inputs, the system ‘feeds back’ into itself. Norbert Wiener – the founder of cybernetics – defined feedback as ‘the property of being able to adjust future conduct by past performance’ (1989: 33). According to Wiener, feedback mechanisms based on the measurement of performance make learning possible, both in the animal world and in the technical world of machines – even when the machine is as simple as an elevator (1989: 24). This intuition turned out to be crucial for the subsequent development of machine learning research. However, how feedback processes work in socio-cultural contexts is less clear, especially when they involve both humans and autonomous machines. While mid-twentieth-century cyberneticians like Wiener saw the feedback loop essentially as a mechanism of control producing stability within complex systems, they ‘did not quite foresee its capacity to generate emergent behaviours’ (Amoore 2019: 11). In the words of the literary theorist Katherine Hayles: ‘recursivity could become a spiral rather than a circle’ (2005: 241, cited in Amoore 2019).
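      To make the contrast concrete, the following minimal Python sketch – entirely hypothetical, and not drawn from any of the sources cited above – juxtaposes a stabilizing (negative) feedback loop of the thermostat kind Wiener described with a self-amplifying (positive) recommender loop, in which recursion becomes a spiral rather than a circle.

```python
import random

random.seed(0)  # reproducible toy run

def thermostat(temperature: float, target: float = 20.0, steps: int = 50) -> float:
    """Negative feedback in Wiener's sense: future conduct (heating)
    is adjusted by past performance (the measured temperature)."""
    for _ in range(steps):
        error = target - temperature
        temperature += 0.1 * error  # correct a fraction of the error each step
    return temperature  # converges towards the target: a stable circle

def recommender(scores: list[float], steps: int = 50) -> list[float]:
    """Positive feedback: the system's output (a recommendation) is routed
    back as its input (a click), amplifying whatever leads initially."""
    for _ in range(steps):
        top = scores.index(max(scores))  # output: recommend the current leader
        if random.random() < 0.8:        # users mostly click what is shown...
            scores[top] += 1.0           # ...and the click feeds back as input
    return scores

print(thermostat(5.0))               # close to 20.0: the loop stabilizes
print(recommender([1.0, 1.0, 1.1]))  # the slight leader runs away: a spiral
```

      Run repeatedly, the second loop illustrates the kind of emergent, self-reinforcing dynamic the early cyberneticians did not quite foresee: instead of settling around an equilibrium, the distribution of scores drifts ever further from its starting point.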
