Annotated Bibliography
Lister, M. (2013) The Photographic Image in Digital Culture.
Driven by technology and algorithms, the nature and meaning of photography have changed. The shift from film to digital, algorithmic processing has turned image-making into a computational, programmable medium, and this affects how we perceive and interpret images, including their authenticity and objectivity. This project raises questions about how we consume images and how we perceive them in the light of new technologies.
In my project, what is lost when an image is transformed into text? I did not describe the image in natural language. How many times does an image have to change before its recognisability and objectivity are compromised?
Trevor Paglen, From 'Apple' to 'Anomaly', 2019
Starting from an apple, the project explores a new relationship between humans and images: a world of machine-readable images that do not need to be looked at by humans to be understood. These images are for computers, not people. It uses ImageNet, a large training set that stores photographs of common everyday objects, labelled by hand. The dataset was originally designed to help computers recognise objects, but these labels are still subjective, and this subjective categorisation process is also learnt by the computer. We are not machines; we cannot be truly objective, and so we cannot expect computers to learn truly objective descriptions.
In my own project, I have made hundreds of versions in which I impose my own logic on the computer, making subtle changes by altering some of the data in the code. This can be seen as a process of labelling the code, because I can largely anticipate the final result. I cannot objectively translate images into text.
Can I try to show what we cannot see, make it easier to understand, and reduce the subjectivity of the conversion process?
Anna Ridler, Myriad (Tulips), 2018
Having learnt what ImageNet is, I have a better understanding of this project's intentions. The project is also a dataset, and its author shares the view that datasets like ImageNet make computers inherit the inevitable problems of human decision-making and bias. When material forms are translated into language, something is always lost.
To address, or at least reduce, such decision-making and bias, the author chose to be directly involved in the process of data collection and selection, rather than relying on pre-existing datasets from the internet that may already contain bias. She personally photographed each image, so the project's data sources were controlled and transparent. As I understand it, ImageNet is a biased dataset generated by countless different people labelling different images, whereas this project is a biased dataset generated by a single author labelling the same object.
What is lost when images are converted into text? Readability? Legibility? What interests me is that I have made 100 different versions of an image: what does the computer make of those 100 versions, and how do I feel about them?
Chance and Control: Art in the Age of Computers
This project explores the impact of computers on the relationship between art and technology. The title 'Chance and Control' references 'Cybernetic Serendipity', emphasising the discovery of accidental 'happy accidents' through code, which can indeed generate random events: pseudo-random number generators can be introduced into a program to produce unexpected elements within a planned structure. It also allows sequences to be generated by iteration; the repeated instruction set in the code can be adjusted so that each version is slightly different.
That is what I did with my 100 versions last week, but I was not aiming for serendipity; I kept tweaking the parameters in predictable ways. Perhaps I can create controlled but unpredictable serendipity through image manipulation.
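This 'controlled but unpredictable serendipity' can be sketched in a few lines of Python. This is a hypothetical sketch, not my actual working files: the parameter names (blur, contrast, noise) are invented for illustration. The fixed seed is the 'control'; the random perturbations are the 'chance'.

```python
import random

def make_variations(base_params, n=100, seed=1):
    """Generate n parameter sets, each a small random perturbation of the base.

    The fixed seed makes the whole series reproducible (control),
    while each individual perturbation is unplanned (chance).
    """
    rng = random.Random(seed)
    versions = []
    for _ in range(n):
        # Nudge every parameter by up to +/-20% of its base value.
        versions.append({k: v * (1 + rng.uniform(-0.2, 0.2))
                         for k, v in base_params.items()})
    return versions

# Hypothetical image-processing parameters, one dict per version of the image.
variants = make_variations({"blur": 2.0, "contrast": 1.0, "noise": 0.05})
```

Running the function twice with the same seed reproduces the same 100 versions exactly, so the serendipity is repeatable; changing the seed gives a new, equally unplanned series.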
John Berger, Ways of Seeing (1990)
The first chapter of Ways of Seeing focuses on how visual experience precedes speech. John Berger points out that how we see the world is shaped by what we know and believe. Through vision we establish our place in the world around us, although this relationship is constantly changing. Berger explores how vision is shaped by culture and history, and argues that visual experience is an active process of selection and interpretation.
What happens if I try to process the same image multiple times? I think there might be a comparison here: what is the difference between the original image being copied multiple times, and the original image being processed multiple times and losing some of its properties?
Hito Steyerl, In Defense of the Poor Image
In 'In Defense of the Poor Image', Hito Steyerl explores the concept of the 'poor image': low-resolution, low-quality images that circulate freely in digital media. She shows that images are not just passively received but act within culture through copying, editing and redistribution. I can try to exploit this in my project by breaking images down into their basic elements (such as colour, shape and texture) and reconstructing these into textual descriptions, exploring the multi-layered meanings of images.
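One way to try this decomposition is a deliberately crude translator that keeps only one basic element, the average colour, and discards shape and texture entirely. A minimal Python sketch, assuming images are plain grids of RGB tuples and using an arbitrary six-colour palette of my own (both are assumptions for illustration, not part of any real pipeline):

```python
def describe(pixels):
    """Reduce a grid of (r, g, b) pixels to a one-line textual description.

    Only the average colour survives the translation; shape and texture
    are lost -- which is exactly the loss the project is interested in.
    """
    flat = [p for row in pixels for p in row]
    avg = tuple(sum(px[i] for px in flat) // len(flat) for i in range(3))
    # A tiny, arbitrary palette of 'basic elements' (an assumption, not a standard).
    palette = {
        "red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255),
        "black": (0, 0, 0), "white": (255, 255, 255), "grey": (128, 128, 128),
    }
    # Name the palette colour nearest to the average, by squared distance.
    nearest = min(palette,
                  key=lambda name: sum((a - b) ** 2
                                       for a, b in zip(avg, palette[name])))
    return f"a mostly {nearest} image, {len(pixels[0])}x{len(pixels)} pixels"

# Two different 2x2 images collapse into the same description.
solid_red = [[(250, 10, 10), (250, 10, 10)], [(250, 10, 10), (250, 10, 10)]]
red_noise = [[(255, 0, 0), (245, 20, 20)], [(250, 5, 15), (250, 15, 5)]]
```

The two example images are visibly different, yet `describe` returns the same sentence for both: the conversion from image to text is lossy by construction.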
Poor images, though imperfect in quality, reveal new social dynamics and cultural practices through their online dissemination. I could explore how information is retained or altered across images of different resolutions and qualities, and the impact this has on text production.
Statement
Technology has changed how we process images, and how we perceive them. Humans can recognise and understand visual information from experience and contextual cues; even if an image is partially altered or degraded, we can still use the residual visual cues to infer its content. Computer vision systems, by contrast, rely on algorithms and data, and lack human intuition and experience-based judgement.
In subsequent work, I would like to blur images to different degrees, or vary their colour, quality and level of abstraction, to produce different variations of an image; to use serendipity to create answers I could not have predicted; and to categorise and order the results. What is lost when an image is tampered with? Readability? Recognisability? I have created 100 different versions of an image: what will the computer make of those 100 versions, and how will I feel about them?
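As a first experiment in degrading an image by degrees, a simple box blur applied repeatedly to a greyscale grid shows how each pass erodes the contrast that recognition depends on. This is a stdlib-only sketch under my own assumptions (the grid representation and the contrast measure are invented for illustration):

```python
def box_blur(grid, passes=1):
    """Average each pixel with its 3x3 neighbourhood (greyscale values 0-255),
    repeated `passes` times to degrade the image step by step."""
    h, w = len(grid), len(grid[0])
    for _ in range(passes):
        out = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                window = [grid[ny][nx]
                          for ny in range(max(0, y - 1), min(h, y + 2))
                          for nx in range(max(0, x - 1), min(w, x + 2))]
                out[y][x] = sum(window) // len(window)
        grid = out
    return grid

def contrast(grid):
    """Crude proxy for legibility: the spread between darkest and brightest pixel."""
    flat = [v for row in grid for v in row]
    return max(flat) - min(flat)

# A hard black/white edge, degraded step by step into a gradient.
image = [[0, 0, 255, 255] for _ in range(4)]
series = [box_blur(image, passes=p) for p in range(4)]
```

Ordering the series by `contrast` gives one possible way to 'categorise and order' the variations: each extra pass moves the image further from the original until the edge, and with it the recognisable content, disappears.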
