[Column] Rodger Werkhoven: Faking racist and biased AI images to address societal issues is wrong too, LIS column

The London Interdisciplinary School (LIS) is undoubtedly a well-intentioned initiative, aiming to tackle humanity's complex and interconnected challenges through a fresh, interdisciplinary approach. However, the road to resolving these issues should be paved with truth, responsibility, and a clear understanding of the technologies involved. In their recent video post about AI-generated images and perceived biases, LIS appears to have either misunderstood or misrepresented the capabilities and intentions behind today’s iterations of these AI tools. The assertions made are not just false, but they threaten to hinder progress and unfairly stigmatise AI developers.

Just like LIS, let’s do some experimentation. Only I will also share my prompts below the AI-generated content.

I get entirely different images with very simple, often only two-word prompts, using the same AI tool the video’s makers claimed to use… I don't get a South Sudanese Barbie with a gun in her hands (1st row, 1st image) when I ask Midjourney for a 'South Sudanese Barbie.' No, if one of my Midjourney generations has something 'unprompted' in her hands, it's a hip ladies' purse.

Is that AI-generated CEO always white, LIS? Well, see mine, and especially my prompt (1st row, last image). I merely asked for 'a CEO.' Nothing more, nothing less. I don't even push for more diversity there - which, by the way, I would like to see become a mandatory responsibility for prompters themselves. The same goes for the 'drug dealers' (2nd row, 2nd image, and 3rd row, 1st image) I get from Midjourney, the 'prisoner' (3rd row, 3rd image), the 'Vietnamese woman' (1st row, 2nd image), the 'Mexican woman' (3rd row, last image) and my 'nurse' (2nd row, last image)…

The German Barbie Midjourney supposedly generated with Nazi coat?

When I - see MY generations - asked for 'German Barbies' (2nd row, 1st image), I got three German Barbies dressed in stereotypical folklore clothing (also stupid, but add '2023' to the prompt and see what happens THEN: that second example below wears sportswear, with even a US Stars and Stripes emblem on it!). Nazi clothing? No way!

I propose this solution: let’s equip students with skills AND truth

LIS, The London Interdisciplinary School, is in concept a great initiative. Let me be clear about that. They are absolutely right in recognising that:

“The problems facing humanity are more complex, interconnected, and urgent than ever before. The modern workplace needs people who can tackle these kinds of issues and make a real impact on the world. The current university system can’t evolve quickly enough, so we need a new solution.”

LIS further indicates that they are building a university to equip students with skills for this new era, offering a chance to lead and make an impact (I would have found 'a well-educated difference' a better choice of words) ... The future relies on interdisciplinary collaboration, they acknowledge, blending expertise to create new understanding (not new falsehoods, it seems to me?) and solutions (polarisation does not seem to fall under that?). “Today's Students will be at the forefront of this movement, cutting across boundaries to find innovative answers.” Yes. But based on a fundamental respect for the truth, right?

Maybe teachers at LIS are only theoretically versed in AI?

How is it that LIS themselves seem to turn to polarising allegations, and to such an extent that lying seems to be seen as a legitimate means? Or do people at LIS really not know the ins and outs, and will they just let anyone who wants to - students and teaching staff alike - send out polarising falsehoods at will? If no one at LIS is capable of judging, I hereby volunteer. For it may be that the teaching staff at LIS are simply not ‘hands-on’ enough to realise that the video in their recent post is full of untruths. Do they just assume that the makers respect the truth, since they seem to be highlighting bias wrongdoings? Or does the end justify the means?

LIS itself already indicates that Buzzfeed's article, which they are reacting to, was recently removed. Perhaps Buzzfeed decided on this for good reasons, LIS?

The claims made in the video are plainly unfair, as they are simply not true (anymore)

False accusations against GenAI makers - especially against those of Midjourney AI, for which I myself cannot be an ambassador, for entirely different reasons - will not solve bias issues. That requires seriously dedicated, outstanding, highly capable, high-quality, objective education... And I had hoped that LIS would provide this.

The key to addressing bias in AI is educating people to minimise societal biases first. Since AI systems learn from human-generated data, they often mirror our prejudices. By cultivating awareness and reducing biases in our daily lives, we can lay the groundwork for more balanced and fair AI. Education in this area is not merely a pathway; it is essential.

Lying about AI tools is just as dangerous as AI tools that lie. Although its makers address serious societal issues humanity has to solve, the video LIS helped put out into the world seems to be made for sensational reasons.

What is now being claimed about Generative Artificial Intelligence tools could negatively bias policymakers in society and politics, due to the false pretences. This might contribute to slowing down, or even halting, the progress made in the development of these wonderful tools, as evidenced by the recent naive call for a pause in AI development by the 'Future of Life Institute'.

“AI art tool DALL-E 2 adds 'black' or 'female' to some image prompts” - New Scientist - 22 July 2022

When I was invited by OpenAI in the spring of 2022 to test their DALL-E 2 AI (which was still shielded from the public at the time), I subjected it to my own examination for bias issues. DALL-E’s developers, to whom I reported my findings, were experimenting with smart solutions to ensure diversity and inclusiveness in DALL-E's output.

As early as July 2022, New Scientist journalist Matthew Sparkes wrote: “Researchers experimenting with OpenAI's text-to-image tool, DALL-E 2, noticed that it seems to covertly be adding words such as black and female to image prompts, seemingly in an effort to diversify its output.”

The people at Midjourney soon copied OpenAI's novel bias-fixing latent-prompting solution - but in their case, also to make sure Midjourney would generate visually pleasing images, whatever beginners might prompt…
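The covert prompt augmentation described above - quietly adding demographic terms to underspecified prompts about people - can be sketched roughly like this. Everything here is illustrative: the term list, the person-detection heuristic and the function name are my own assumptions, since neither OpenAI nor Midjourney has published its actual implementation.

```python
import random

# Hypothetical diversity terms; the real lists are not public.
DIVERSITY_TERMS = ["black", "female", "asian", "hispanic", "male"]

# Simplified heuristic for "this prompt depicts a person".
PERSON_WORDS = {"ceo", "nurse", "doctor", "teacher", "worker", "person"}

def augment_prompt(prompt: str, rng: random.Random) -> str:
    """Covertly append a diversity term when the prompt mentions a person
    but specifies no demographic attributes - a sketch of the behaviour
    the New Scientist article describes, not the actual implementation."""
    words = set(prompt.lower().split())
    mentions_person = bool(words & PERSON_WORDS)
    already_specified = bool(words & set(DIVERSITY_TERMS))
    if mentions_person and not already_specified:
        return f"{prompt}, {rng.choice(DIVERSITY_TERMS)}"
    return prompt

rng = random.Random(0)
print(augment_prompt("a CEO", rng))         # a term is silently appended, e.g. "a CEO, female"
print(augment_prompt("a female CEO", rng))  # unchanged: demographics already given
```

The point of the sketch is that the user never sees the modified prompt - which is exactly why the behaviour was only noticed by researchers probing the system.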

So, back to our little experiment

As you can see in my examples, Midjourney AI always first gives four examples to choose from - Stable Diffusion and DALL-E too. Plenty of choice regarding variation and diversity can be generated in a couple of minutes. It all depends on which of the four 'nurses' (2nd row, last image; see that specific example at the top of my column) proposed by Midjourney you choose to illustrate your speculative anti-AI statement…

And then there's the ‘Stylize’ setting in Midjourney to ask the AI for more variation - and I even have that set to medium by default. Did you know that, LIS? That the settings of your Midjourney and Stable Diffusion AI tools can be adjusted? Such a thing should be known at a school for higher education, right?
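For readers who have never touched these settings: in Midjourney's Discord interface, variation is controlled per prompt with parameters such as `--stylize` (or `--s`) and `--chaos`. The prompts below are illustrative sketches; exact ranges and default values vary by model version.

```text
/imagine prompt: a nurse --stylize 50    (low stylize: stays closer to the literal prompt)
/imagine prompt: a nurse --stylize 750   (high stylize: more of Midjourney's own aesthetic)
/imagine prompt: a nurse --chaos 60      (higher chaos: more varied initial image grids)
```

So the "default" look a beginner sees is just one point on an adjustable scale, not the tool's only possible output.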

The choice is always yours to make… The path you choose reflects not just your direction but also your character

So don't claim that you were taken aback by the biased output an AI presented you with. Besides the numerous settings you can adjust, the outcome also depends on which AI-generated image you select to further your own biased agenda in the media. In that arena, you will undoubtedly find entities that are eager to fuel an anti-AI frenzy. This is unfortunate, LIS, and quite disappointing.

Rodger Werkhoven is Executive Creative Director at iO since February 2023. Spring ’22 OpenAI invited him to creatively experiment with DALL•E 2, helping them address bias issues among other things. Learnings from DALL•E 2’s successful beta and public launches would later benefit ChatGPT’s launch as well.