Prof. David Berry

University of Sussex

David Berry is Professor of Digital Humanities (Media and Film) at the University of Sussex. His research focusses on the theoretical and medium-specific challenges of understanding digital and computational media, particularly algorithms, software and code. He is currently exploring the ways in which artificial intelligence and machine learning are articulated in relation to arts and humanities knowledges. He has published many books and edited volumes, including Digital Humanities: Knowledge and Critique in a Digital Age (together with Anders Fagerjord, 2017), The Philosophy of Software: Code and Mediation in the Digital Age (2011), Postdigital Aesthetics: Art, Computation and Design (2015) and Critical Theory and the Digital (2014).


Explainability and the Digital Condition

Abstract

This talk argues that to understand the digital condition we need to examine the explanatory deficit that emerges in modern societies from their reliance on automated decision systems. The challenge of new forms of social obscurity created by the implementation of technical systems is heightened by the machine learning systems that have emerged in the past decade. As a result, an important critique of computational opaqueness, called “explainability”, has emerged. We see it invoked, for example, in demands for explanations of the classifications made by facial recognition technologies, and in public unease with algorithmic judicial systems and other automated decision systems. Explainability is a key new area of research within artificial intelligence and machine learning, known as explainable AI (xAI), and requires that a computational system be able to provide an explanation for a decision it has made. I explore the notion of explainability to understand how it helps identify the ethical and interpretative lacuna around machine learning, and why it seeks to close that lacuna by means of explanatory responses generated by the technology itself. However, I argue that the idea of a technological response to an interpretability problem is doomed to failure so long as explainability is understood through such narrow technical criteria. In this paper, therefore, I seek to widen its applicability by connecting explainability to ideas of explanatory publics, understanding and digital literacies.

Prof. Dominique Cardon

Médialab of Sciences Po

Dominique Cardon is professor of sociology and director of the Médialab at Sciences Po in Paris. He works on the transformation of the public space and the uses of new technologies. He has published widely on the place of new technologies in the alter-globalization movement, on alternative media and on processes of bottom-up innovation in the digital world. His recent research focuses on the power of algorithms in the classification of digital information. His work seeks to articulate the sociology of science and technology with a sensitive approach to the transformations of contemporary social worlds. He is currently working on the social effects of the generalization of machine learning techniques across an ever-increasing number of situations of everyday life. He has published La démocratie Internet (2010), Médiactivistes (with Fabien Granjon, 2010), Qu’est-ce que le digital labor ? (with Antonio Casilli, 2015) and Culture numérique (2019).


Society2Vec: From Categorical Prediction to Behavioral Traces

Abstract

Since 2010, machine learning based predictive techniques, and more specifically deep learning neural networks, have achieved spectacular performances in fields such as image recognition and automatic translation, under the umbrella term of “Artificial Intelligence”. But their filiation with this field of research is not straightforward. In the tumultuous history of AI, learning techniques using so-called “connectionist” neural networks were long mocked and ostracized by the “symbolic” movement. This lecture retraces the history of artificial intelligence through the lens of the tension between symbolic and connectionist approaches. From the perspective of a social history of science and technology, it seeks to highlight how researchers, relying on the availability of massive data and the multiplication of computing power, have undertaken to reformulate the symbolic AI project by reviving the spirit of the adaptive and inductive machines dating back to the era of cybernetics.

The hypothesis behind this lecture is that the new computational techniques used in machine learning provide a new way of representing society, no longer based on categories but on individual traces of behaviour. The new algorithms of machine learning replace the regularity of constant causes with the “probability of causes”. What is emerging is therefore another way of representing society and the uncertainties of action. To defend this argument, the lecture will pursue two parallel investigations. The first, from the perspective of the history of science and technology, traces the emergence of the connectionist paradigm within artificial intelligence techniques. The second, based on the sociology of statistical categorization, focuses on how the calculation techniques used by major web services produce predictive recommendations.

Prof. Lina Dencik

Cardiff University

Lina Dencik is Professor at the Cardiff School of Journalism, Media and Culture and Co-Founder/Director of the Data Justice Lab, a leading international hub for research on data justice. Her research concerns the interplay between media developments and social and political change, with a particular focus on resistance, governance and the politics of data. Her books include Media and Global Civil Society (2012), Worker Resistance and Media: Challenging Global Corporate Power in the 21st Century (co-authored with Peter Wilkin, 2015), Critical Perspectives on Social Media and Protest: Between Emancipation and Control (co-edited with Oliver Leistert, 2015) and Digital Citizenship in a Datafied Society (co-authored with Arne Hintz and Karin Wahl-Jorgensen, 2018). Her fifth book, The Media Manifesto (with Natalie Fenton, Des Freedman and Justin Schlosberg), was published in 2020.


Situating Data Politics: From Ethics to Justice

Abstract

The digital monitoring, tracking, profiling and prediction of human behaviour and social activities underpin the information order often described as surveillance capitalism. Increasingly, they also help determine decisions that are central to our ability to participate in society, in areas such as welfare, education, crime and work, and in whether we can cross borders. How should we understand what is at stake in such developments? Often, we are dealt a simple binary that pits increased (state) security and efficiency on the one hand against concerns with privacy and the protection of personal data on the other. Recently, we have also seen a growing focus on questions of bias, discrimination and ‘fairness’ enter this debate. In this talk, I take stock of these concerns and present research that examines the implementation of data-driven systems in practice across pertinent sites of governance. I make the case that rather than focusing on the data system itself, we need to understand data systems as part of broader societal transformations, placing much greater emphasis on why these technologies are developed and implemented in the first place, and how they relate to a wider political economy, in order to grapple with the politics of data in full.

Prof. N. Katherine Hayles

Duke University; University of California, Los Angeles

N. Katherine Hayles teaches and writes on the relations of literature, science and technology in the 20th and 21st centuries. Her print book How We Think: Digital Media and Contemporary Technogenesis was published by the University of Chicago Press in spring 2012. Her other books include How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics (1999), which won the René Wellek Prize for the best book in literary theory for 1998–99, and Writing Machines (2002), which won the Susanne Langer Award for Outstanding Scholarship. She is Professor and Director of Graduate Studies in the Program in Literature at Duke University, and Distinguished Professor Emerita at the University of California, Los Angeles.


Beyond Consciousness: Reconceiving Meaning in the Computational Era

Abstract

Meaning has traditionally been deeply associated with consciousness, and more specifically with human consciousness. But the multiple crises that go by the name of the Anthropocene, including global warming and the Sixth Mass Extinction, have given new urgency to the search for frameworks that go beyond anthropocentrism to broader, more encompassing perspectives that take nonhuman lifeforms more fully into account. An important development in this regard has been biosemiotics, which studies signs as they are created and interpreted by nonhumans. Building on this work, the lecture will discuss how meaning is understood in a biosemiotic framework, drawing on the work of Jesper Hoffmeyer, Terrence Deacon, and Wendy Wheeler, among others. Moreover, the talk will challenge the view, common among biosemioticians, that computers cannot create meaning, showing that their objections suffer from biologism: the faulty extension of criteria proper to biological living creatures to computational entities. Working from the differences in embodiment between computers and biological beings, the talk will set forth criteria for the generation, communication, and dissemination of meanings among computational media. Cognition, it will be argued, is common to humans, nonhumans, and computers. Cognitive practices in this broad understanding give rise to emergent meanings within and among all entities capable of cognition, resulting in a cognisphere of planetary dimensions.

Prof. Lev Manovich

City University of New York

Lev Manovich is one of the world’s leading theorists of digital culture and a pioneer in the application of data science to the analysis of contemporary culture. He is the author and editor of 15 books, including Cultural Analytics (2020), AI Aesthetics (2019), Software Takes Command (2013) and The Language of New Media (2001). He is a Presidential Professor at The Graduate Center, City University of New York, and Director of the Cultural Analytics Lab. The lab has created projects for the Museum of Modern Art (New York), the New York Public Library, Google and other clients.


How to Predict Culture in 2050?

Abstract

Science fiction movies and novels, NGOs, think tanks, futurists and many others make predictions about the future. But these predictions usually only concern the social and economic aspects of human life – technology, space travel, the impact of climate change, countries’ economies, population growth and so on. We are never told what kind of culture we may have decades from now. Fashion, literature, cinema, social media, visual art, theatre, performance and other cultural forms are absent from these predictions.

How can we use qualitative humanities theories and computational methods to explore possible scenarios for future culture? Can the work in digital humanities and cultural analytics be extended to look into the future, as opposed to only analyzing cultural data from the past? What are the relevant questions to ask and the dimensions to consider? In my lecture, I will explore these questions.

Prof. Jussi Parikka

University of Southampton

Jussi Parikka is Professor in Technological Culture & Aesthetics at the Winchester School of Art (University of Southampton) and Visiting Professor at FAMU, Prague, where he leads the project Operational Images and Visual Culture (2019–2023). Parikka is the author of various books on media archaeology, digital culture and technical media, including Digital Contagions: A Media Archaeology of Computer Viruses (2007; 2nd ed. 2016), Insect Media: An Archaeology of Animals and Technology (2010), What is Media Archaeology? (2012), and A Geology of Media (2015). In addition, he has edited and co-edited such publications as The Spam Book: On Porn, Viruses and Other Anomalous Objects from the Dark Side of Digital Culture (2009), Media Archaeology: Approaches, Applications and Implications (2011) and Writing and Unwriting (Media) Art History: Erkki Kurenniemi in 2048 (2015). His current projects focus on operational images, environmental humanities, and the genealogy and current uses of “laboratories” in (digital) humanities, design and media (forthcoming as the co-authored The Lab Book in 2021).


Invisibility and Invisuality: Images in Digital Arts/Culture

Abstract

To visualize, to make visible and to make tangible persist as central concerns of both the practice and the theory of digital culture. A multitude of approaches gravitate around the idea that making the invisible visible is a key task of the (critical) arts and humanities, and thus where our methodological investment should lie if we are to understand either realities that are not directly available to experience or power (structures) that remain hidden in other ways (commercial, legal, etc.). In Wendy Hui Kyong Chun’s terms, the visibly invisible nature of digital culture, from interfaces to programming to data, persists as our key problem. Transparency and informatic opacity (Zach Blas) are another pair of terms that articulate related issues. But what forms of conceptualization and practice can ensure that visible/invisible are not taken as simple oppositional terms, but are understood to operate in complex ways in aesthetic and epistemic contexts?

This talk will address invisibility and invisuality (Mackenzie and Munster) as two key terms and tropes that operate in relation to visual culture and technical images. Drawing on current work in the Operational Images and Visual Culture project (located at FAMU, Prague), I want to address the notion of the operational image as it pertains, in different ways, to the theme of the invisual/invisible. While artistic work (Paglen, Steyerl et al.) has recognized some core aspects of invisibility in relation to digital art and aesthetics, as well as techniques of machine vision and even AI, I want to investigate some further implications of this terminology and its benefits for addressing questions of images in the context of contemporary data culture (see also Dvořák 2021).

In this context, I will also draw on works such as Rosa Menkman’s video Whiteout (2020), which can be viewed online (https://beyondresolution.info/Whiteout) before the talk.