Future Minds – Critique of Artificial Intelligences
For more detailed information, see the exmediawiki seminar page »here«.
Thursday weekly 11 am – 1 pm
Introduction to the programming of artificial intelligence (basic seminar)
The far-reaching cultural consequences of AI do not only become apparent in upload filters that algorithmically censor unwanted text and image content, in the auctioning of AI paintings at Christie’s, in the formulation of ethical guidelines for dealing with AI, or in the growing number of AI-powered hate-speech bots. They begin, very abstractly and mostly unnoticed, in the programming itself, in semi-public, highly formal fields of discourse.
This is exactly where we start, experimentally. The seminar introduces the subsymbolic AI of neural networks and their programming in a very elementary way. The aim is to code from scratch, to discuss the code together and learn to understand it, and thereby to learn to assess the possibilities, limits and dangers of this technology for oneself.
We do not adopt the technology of artificial intelligence as a tool in the sense of Homo faber, but combine programming as an artistic practice with a critical analysis of its social effects, which can be explored in more depth in the parallel theory seminar “Future Minds – Critique of Artificial Intelligences”.
Thursday weekly 2 pm – 4 pm
Future Minds – Critique of Artificial Intelligences (theory seminar)
Starting with a look back at the roots of Artificial Intelligence (AI) in cybernetics, a number of topics and terms are presented that require closer examination if one wants to position oneself as an artist or theorist. These are by no means exclusively terms that can be assigned to the cybernetic tradition, such as artificial neuron, black box or machine learning. Even fashionable terms such as open culture, transparency or technological singularity must be renegotiated against the background of the foreseeable cultural, political and social consequences of AI technologies.
For example, what is the control problem of automated decision-making processes? Which problems are technical and which are ethical when we delegate decisions to machines? Can ethics be programmed? Can AIs justify their decisions? AI is used to control a wide variety of processes, and the transition between control and prediction is fluid. But predictions are subject to fundamental limits, which in turn point to the limits of AIs.
In addition to the basic course “Introduction to the Programming of Artificial Intelligences”, which takes place in parallel, the seminar gives a condensed insight into machine learning and the working methods of the artificial neural networks that have triggered the current discourse on AI. Selected artistic works show how the arts aesthetically and critically examine these technologies.
Days & Topics
Poetic coding | Machine poetry | How to code Autopoetry Apparatus | Machine reading | Machine seeing | Machine learning | AI on Rasp Pi | How to code Stochastic Walks on the Latent Space | How to Dual Use? | Generative Adversarial Networks | How to code Positive Extraction and Reproduction System | DNA phenotyping | Convolutional Neural Networks | How to code Office Plants
History of AI | Machine Learning + Explainable AI | The relationship of digital platform capitalism to the singularity ideology | AI as a pacesetter for technocratic retrotopias | Computational Creativity | Prediction + Decision making | Where does the war begin? | Biases + digital ethics | Singularity, Strong AI & Functionalism
table of contents
All examples work with the MNIST handwritten digit database.
Dense Neural Net
- Coding Artificial Neural Nets (DNN) in TensorFlow & Keras
- Simple Autoencoder in TensorFlow & Keras
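The notebooks themselves use TensorFlow & Keras; as a from-scratch companion piece (a hypothetical sketch, not taken from the seminar materials), the forward and backward pass of a small dense net can be written in plain NumPy. The shapes follow MNIST (784 input pixels, 10 classes), but the data here is random:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an MNIST batch: 8 flattened 28x28 "images", one-hot labels.
X = rng.random((8, 784))
y = np.eye(10)[rng.integers(0, 10, size=8)]

# Dense layers: 784 inputs -> 32 hidden units -> 10 outputs.
W1, b1 = rng.normal(0, 0.1, (784, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.1, (32, 10)), np.zeros(10)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)        # hidden activations
    return h, sigmoid(h @ W2 + b2)  # network output

def mse(out, y):
    return ((out - y) ** 2).mean()

loss_before = mse(forward(X)[1], y)

# Plain gradient descent on the mean squared error.
lr = 0.5
for _ in range(100):
    h, out = forward(X)
    d_out = 2 * (out - y) / y.size * out * (1 - out)  # gradient at output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)                # backpropagated to the hidden layer
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

loss_after = mse(forward(X)[1], y)
print(loss_before, loss_after)  # the loss shrinks as the net memorises the batch
```

In Keras the same architecture is a two-line `Sequential` model; writing out the gradients once makes visible what `model.fit` hides.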
Convolutional Neural Nets
- CNN in TensorFlow & Keras
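Before reaching for Keras' `Conv2D`, the sliding-window operation at the heart of a CNN fits in a few lines of NumPy (again a hypothetical illustration, not seminar code): a kernel is multiplied with every image patch, and edges light up in the resulting feature map.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the core operation of a convolutional layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A 28x28 dummy "digit" whose right half is white, i.e. one vertical edge.
img = np.zeros((28, 28))
img[:, 14:] = 1.0

# A classic hand-crafted vertical-edge kernel; a CNN learns such kernels itself.
edge_kernel = np.array([[-1., 0., 1.],
                        [-1., 0., 1.],
                        [-1., 0., 1.]])

fmap = conv2d(img, edge_kernel)
print(fmap.shape)        # (26, 26)
print(float(fmap.max())) # 3.0 -- the response peaks where the window crosses the edge
```

The feature map is zero over the flat black and white regions and positive only in the two columns where the kernel straddles the edge.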
Generative Adversarial Networks
- GAN in TensorFlow & Keras
- Visualizing Activations (based on model from CNN-in-Keras.ipynb)
- Visualizing Heatmaps (based on model from CNN-in-Keras.ipynb)
- LIME for image classification by using Keras (InceptionV3)
- for English text corpora
- for German text corpora
- Basic Encodings & traditional embeddings (ONE-HOT / BOW / TF-IDF)
- Basic Chatbots (TF-IDF)
- Chatbots with Chatterbot
- Sentiment Analysis for German text corpora
- train a Word2vec model on your own texts
- LSTM text generation
- Loading and scraping data from the web
- and then the one from last time…
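As a taste of the basic encodings listed above, TF-IDF needs no library at all. This hypothetical mini example (the documents are invented, not from the seminar corpora) weights each word by how specific it is to a document; words occurring everywhere get weight zero:

```python
import math
from collections import Counter

docs = [
    "the machine reads the text",
    "the machine sees images",
    "poetry machine writes text",
]
tokenized = [d.split() for d in docs]
vocab = sorted({w for doc in tokenized for w in doc})

def tf_idf(doc):
    """TF-IDF vector of one tokenized document over the shared vocabulary."""
    counts = Counter(doc)
    vec = []
    for w in vocab:
        tf = counts[w] / len(doc)                       # term frequency in this document
        df = sum(w in d for d in tokenized)             # number of documents containing w
        vec.append(tf * math.log(len(tokenized) / df))  # weight by rarity across the corpus
    return vec

vectors = [tf_idf(doc) for doc in tokenized]
# "machine" occurs in every document -> its TF-IDF weight is 0 everywhere;
# "poetry" occurs only in the last document -> positive weight there.
```

A minimal TF-IDF chatbot then answers by returning the stored reply whose vector has the highest cosine similarity to the user's input.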
├── Computational Creativity
│ ├── How could a copycat ever be creative?, Douglas Hofstadter, 1993
│ ├── Harold Cohen and AARON, Paul Cohen, 2016
│ ├── Computer Models of Creativity, Margaret A. Boden, 2009
│ ├── Creativity and artificial intelligence, Margaret A. Boden, 1998
│ └── Kreativität und Künstliche Intelligenz. Einige Bemerkungen zu einer Kritik algorithmischer Rationalität, Dieter Mersch, 2019
├── Digital Ethics & Biases
│ ├── Ethik und algorithmische Prozesse zur Entscheidungsfindung oder -vorbereitung, Lorena Jaume-Palasí and Matthias Spielkamp, 2018
│ ├── AUS POLITIK UND ZEITGESCHICHTE – Künstliche Intelligenz, Bundeszentrale für politische Bildung, 2018
│ ├── Machine Bias: Artificial Intelligence and Discrimination, Can Yavuz, 2019
│ ├── Excavating AI – The Politics of Images in Machine Learning Training Sets, Kate Crawford and Trevor Paglen, 2021
│ ├── Maschinenethik – Normative Grenzen autonomer Systeme, edited by Matthias Karmasin, 2019
│ └── A Survey on Bias and Fairness in Machine Learning, Ninareh Mehrabi et al., 2019
├── Functionalism & Consciousness
│ ├── Troubles with Functionalism, Ned Block, 1978
│ ├── What Is Functionalism?, Ned Block, 1996
│ ├── Facing Up to the Problem of Consciousness, David J. Chalmers, 1995
│ ├── A Naturalistic Approach to the Hard Problem of Consciousness, Wolf Singer, 2019
│ ├── Functionalism, Stanford Encyclopedia of Philosophy
│ ├── Explaining Consciousness – The ‘Hard Problem’, edited by Jonathan Shear, 1995
│ └── The Puzzle of Conscious Experience, David J. Chalmers, 1995
├── History of AI
│ ├── A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, John McCarthy et al., 1955
│ ├── ELIZA – A Computer Program for the Study of Natural Language Communication Between Man and Machine, Joseph Weizenbaum, 1966
│ ├── A symbolic analysis of relay and switching circuits, Claude E. Shannon, 1938
│ ├── Computing Machinery and Intelligence, A. M. Turing, 1950
│ └── A logical calculus of the ideas immanent in nervous activity, Warren McCulloch and Walter H. Pitts, 1943
├── Machine Learning & XAI
│ ├── Machine Learners – Archaeology of a Data Practice, Adrian Mackenzie, 2017
│ ├── Advanced Lectures on Machine Learning, Olivier Bousquet, 2004
│ ├── Explainable Artificial Intelligence: A Survey, Filip Karlo Došilović, 2018
│ ├── Explaining and Harnessing Adversarial Examples, Ian Goodfellow, 2014
│ ├── Towards A Rigorous Science of Interpretable Machine Learning, Finale Doshi-Velez and Been Kim, 2017
│ ├── The Mythos of Model Interpretability, Zachary C. Lipton, 2017
│ ├── One Pixel Attack for Fooling Deep Neural Networks, Jiawei Su et al., 2019
│ ├── A Survey Of Methods For Explaining Black Box Models, Riccardo Guidotti et al., 2018
│ ├── Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), Amina Adadi and Mohammed Berrada, 2018
│ ├── Explainable Artificial Intelligence Research at DARPA, Slides by David Gunning, 2019
│ └── Explainable Artificial Intelligence (XAI), Alexandre Duval, 2019
├── Prediction & Decision Making
│ ├── Prediction Machines – The Simple Economics of Artificial Intelligence, Ajay Agrawal et al., 2018
│ ├── Risk and the War on Terror, edited by Louise Amoore and Marieke de Goede, 2008
│ ├── Algorithmic Risk Assessment in the Hands of Humans, Megan T. Stevenson and Jennifer L. Doleac, 2019
│ ├── Security, Technologies of Risk, and the Political: Guest Editors’ Introduction, Claudia Aradau et al., 2008
│ ├── The politics of possibility – Risk and Security Beyond Probability, Louise Amoore, 2013
│ └── Wild Cards. Imagination als Katastrophenprävention, Jutta Weber, 2014
├── Secondary literature
│ ├── TIME Magazine – special edition: Artificial Intelligence – The Future of Humankind, 2017
│ ├── A Stroll Through the Worlds of Animals and Men, Jakob von Uexküll, 1934
│ ├── Künstliche Intelligenz – Ein moderner Ansatz, Stuart Russell and Peter Norvig, 2012
│ ├── SUPERINTELLIGENCE – Paths, Dangers, Strategies, Nick Bostrom, 2014
│ ├── Streifzüge durch die Umwelten von Tieren und Menschen, Jakob von Uexküll and Georg Kriszat, 1956
│ ├── The Human Use of Human Beings – Cybernetics and Society, Norbert Wiener, 1950
│ └── Sapiens – A Brief History of Humankind, Yuval Noah Harari, 2014
└── Singularity & Strong AI
    ├── The Singularity: A Reply, David J. Chalmers, 2012
    ├── The Singularity: A Philosophical Analysis, David J. Chalmers, 2010
    ├── The Coming Technological Singularity: How to Survive in the Post-Human Era, Vernor Vinge, 1993
    └── What to Do with the Singularity Paradox?, Roman Yampolskiy, 2013
The relationship of digital platform capitalism to the singularity ideology
with Dr. Thomas Wagner (cultural sociologist, publicist)
In the environment of corporations like Google, Facebook and Co., an ideology of technological feasibility thrives. Its followers propagate the fusion of man and machine, speculate about artificial superintelligence and dream of immortality in the cloud. All social problems could be solved en passant. Fantastic visions, crazy ideas. But more than pipe dreams: their propagandists finance start-ups, advise governments, run the laboratories of high-tech companies and disseminate their ideas at their own universities.
Thomas Wagner aptly described this ideology – the merging of robotics, capitalism, militarism, technodeterminism, technocracy and instrumental reason – as »robocracy« and explained how it cements the rule of the current elites. Ultimately, this raises the question of the democratic use of technology.
As with any form of technology, critical approaches must realistically assess the possibilities, impossibilities, opportunities and risks of artificial intelligence, relate them to capitalism and domination, and discuss which technologies hold emancipatory potential and which purely destructive force, in order then to formulate demands for the democratic control and shaping of technical progress. The world of robotics today is primarily shaped by ideologies and utopias; it lacks realism and a human-oriented concept of democratic socialist computer technology.
AI as a pacesetter for technocratic retrotopias
with the editorial collective capulcu
Technology is not a mere tool, it is a means to an end – it has never been neutral! It is convenient but ahistorical to reduce technology to a simple tool and to shift its political substance to the “external” conditions of its “application”. Seen that way, the dynamics of technological development as an instrument of domination cannot be grasped. We therefore deliberately speak of a technological attack, in order to grasp its ideological content beyond its tool character.
We discussed a broader view of technology that does justice to the social impact of our adversary – technocracy.
Technocracy confronts us with an accelerated backward movement. AI-based permanent digital assistance, the connectivity of everyone with everything, builds on tendencies of behaviorism long thought to have been overcome: the view that only the guiding assistance of experts (today in the form of self-learning algorithms) enables us to make rational decisions for our individual lives in an increasingly complex world. With this kind of “friendly” paternalism we are currently experiencing, not only in China, a remarkable convergence of economic and state interests in control. China is already exporting its social credit systems for behavioral economics to over 30 countries.
How can we succeed in leaving behind the regressive, technocratic retrotopias and shaping social change in a truly progressive sense?
Where does the war begin?
with Christoph Marischka of the Information Centre on Militarization (IMI)
In Paul Virilio’s notion of “pure war”, the military disappeared into technology as early as the beginning of the last century: pure war detached itself from the traditional institutions of the military. In Germany and the EU, too, public research funding has for more than 15 years concentrated on militarily relevant technologies such as pattern and situation recognition using artificial intelligence, among other things under the heading of “security research”.
The current tendency to network everything with everything, to collect all imaginable data and to calculate the future from it, corresponds to the long-cherished vision of „networked warfare“ on a „transparent battlefield“ – and is immediately implemented in current military strategies.
Against this background, the tranquil university town of Tübingen is to become a top European location for the development of artificial intelligence, modelled on Silicon Valley. The future is to be actively shaped here – but a closer look at the partners, sponsors and donors makes clear that the research is primarily meant to serve their interests: Amazon, the automotive industry and, to some extent, the armaments industry are involved. The research project therefore met with resistance, which led to an intensive public discussion about AI in the city. In his lecture, Christoph Marischka presented this Tübingen resistance movement and put it up for discussion, asking, almost from its midst, for ways of jointly formulating cultural,
DNA phenotyping Workshop
DNA phenotyping refers to methods that draw conclusions about the external appearance of an individual from DNA material. This so-called “extended DNA analysis” is still prohibited in Germany, but a corresponding change in the law for its future use in criminal proceedings is being planned. “Genetic phantom images” could soon be used to predict characteristics such as the external appearance, age and biogeographical origin of suspects.
In a three-hour workshop, we experimented and discussed the possibilities and limitations of DNA analysis methods and their subsequent machine evaluation together with Matthias Burba.
Matthias Burba studied law and biology. In the course of his professional career he has worked as a legal advisor, as head of the legal department and head of forensic science at the Hamburg police, as chairman of the subcommittee on law and administration of the conference of interior ministers, and as deputy Hamburg data protection officer.
See more information about the workshop on the exmediawiki workshop page »here«.