The susceptibility of machine learning systems to bias has recently become a prominent field of study in many disciplines, most visibly at the intersection of computer science (Friedler et al. 2019) and science and technology studies (Selbst et al. 2019), and also in disciplines such as African-American studies (Benjamin 2019), media studies (Pasquinelli and Joler 2020) and law (Mittelstadt et al. 2019). As part of this development, machine vision has moved into the spotlight of critique as well, particularly where it is used for socially charged applications like facial recognition (Buolamwini and Gebru 2018; Garvie et al. 2016). In many critical investigations of machine vision, however, the focus lies almost exclusively on dataset bias (Crawford and Paglen 2019), and on fixing datasets by introducing more, or more diverse, sets of images (Merler et al. 2019). In the following, we argue that this focus on dataset bias paints an incomplete picture, metaphorically and literally. In the worst case, it increases trust in quick technological fixes that fix (almost) nothing, while systemic failures continue to reproduce. We propose that machine vision systems are inherently biased not only because they rely on biased datasets (which they do) but also because their perceptual topology, their specific way of representing the visual world, gives rise to a new class of bias that we call perceptual bias. Concretely, we define perceptual topology as the set of those inductive biases in a machine vision system that determine its capability to represent the visual world. Perceptual bias, then, describes the difference between the assumed “ways of seeing” of a machine vision system, that is, our reasonable expectations regarding its way of representing the visual world, and its actual perceptual topology.
We show how perceptual bias affects the interpretability of machine vision systems in particular, by means of a close reading of a visualization technique called “feature visualization”. We conclude that dataset bias and perceptual bias both need to be considered in the critical analysis of machine vision systems and propose to understand critical machine vision as an important transdisciplinary challenge, situated at the interface of computer science and visual studies/Bildwissenschaft.
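The “feature visualization” technique mentioned above works by optimizing an input image, starting from noise, so that a chosen unit of a trained network responds as strongly as possible; the resulting image is taken to show what that unit “looks for”. As a minimal sketch of the underlying activation-maximization loop, the following uses a single hypothetical linear unit in place of a real network; the weights, step size, and iteration count are illustrative assumptions, not details from any published system:

```python
import numpy as np

# Minimal sketch of feature visualization by activation maximization:
# gradient-ascend an input so that one unit's activation is maximized.
# The "network" here is a single hypothetical linear unit with weights w,
# a stand-in for a real convolutional channel.

rng = np.random.default_rng(0)
w = rng.normal(size=64)            # weights of the unit being visualized
x = rng.normal(size=64) * 0.01     # start from small random noise

for _ in range(200):
    activation = w @ x             # the unit's response to the input x
    grad = w                       # d(activation)/dx for a linear unit
    x = x + 0.1 * grad             # gradient ascent step
    x = x / np.linalg.norm(x)      # keep the input's norm bounded

# The optimized input aligns with w, i.e. with the unit's preferred pattern.
cosine = (w @ x) / (np.linalg.norm(w) * np.linalg.norm(x))
print(round(cosine, 3))            # → 1.0
```

In a real system the unit sits deep inside a nonlinear network and the gradient is obtained by backpropagation, but the loop is the same; the paper's argument is precisely that such optimized images reflect the network's perceptual topology rather than a neutral record of its training data.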