How do we use and influence AI?

Image Credit: A DALL-E 2 generated image of a person standing in a field, painted in an abstract style

In “ChatGPT and the Imagenet Moment,” Benedict Evans writes:

… a generative ML system could make lots more ‘disco’ music, and it could make punk if you described it specifically enough (again, prompt engineering), but it wouldn’t know it was time for a change and it wouldn’t know that punk would express that need. When can you ask for ‘something raw, fresh and angry that’s a radical change from prog rock?’ And when can a system know people might want that? There is some originality in creating new stuff that looks like the patterns we already have, but the originality that matters is in breaking the pattern. Can you score that?

As AI gets better and better at repeating the patterns that already exist. As we get better at producing prompts – what Evans calls ‘prompt engineering’. As AI starts getting integrated into tools in ways we don’t even notice. As all that happens, how do we identify the places where we need human intervention? And not just to retain unique value or to increase the amount of interesting, hard work we do, but so that we are not building a future based on our past. Because, you know what? That past is full of exploitation, extraction, and oppression. Many would argue the very way AI works is an example: its intelligence is built off the work of others, consumed at mass scale.

So, what does this mean for civil society? How do we take advantage of the technology, influence the system, and judge the output?

  1. Algorithm detectives. We need an independent body that reviews the tools – the algorithms, the training of the AI – so that we can be clear about the patterns these systems learn from and the ways their responses are promoted.
  2. Use AI to identify patterns of injustice. This will require excellent prompt engineers who are asking and asking and asking and then sharing the results.
  3. Illuminate what is missing. If AI is trained on whatever massive data is available, we need to show what and who is missing from that data. And we have to find ways to include it. Sometimes that will come from technical means – refining, adding, training – and sometimes it will come from advocacy. It must be intentional.
  4. Use the tools. We can’t just opt out. Usage shapes the tools. We have to use them and aggregate our learnings with the goal of improving our own efficiency and shaping the tools themselves. And let’s normalize it. Don’t hide the fact that you turned the grant question into an AI prompt.
  5. Build context. That’s what struck me about the quote above: context is something humans have in a hyperlocal way. We can adjust, disregard, slow down, or speed up what AI generates for us based on that context. We can identify places where the pattern is wrong for reasons of justice and equity. We have the context and the experience.

#areas #AI

Discuss...