
Biased AI And Its Implications For News And Education

The powers of AI, more specifically NLP (Natural Language Processing) models, have exploded in recent years. Only a few years ago, the best of them could write a sentence or two. Now they can write full, coherent essays that, to the human eye, look almost indistinguishable from what a human would write. With this massive leap forward, not only in NLP but across the AI space in general, we are starting to see more real-world use cases and implementations in our day-to-day lives. The implementation that has caught my eye the most is the idea of an AI journalist, or any AI that collects information and summarises it for you. In theory, this should be one of the best ways to learn about something new. No longer would you have to worry about a journalist getting the facts wrong or being influenced by an outside source; an AI could check all of the sources for you and provide an unbiased account of whatever you need to know. While in theory this seems like it would work, in practice it would be iffy at best.

But it’s a good idea… Right?

While yes, an AI that looks through sources and gives you an unbiased account of something would work in theory, in practice many different problems could arise. Take Microsoft's new Bing AI as an example. If you haven't heard of it, you can ask it almost any question and it will search for the answer and give you a fully cited response. For example, if I ask it for ways to mitigate my climate footprint, it will search the internet and put together an answer for me, citations included.

Bing AI Example
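
To give a rough idea of the pattern behind a tool like this, here is a short Python sketch. It is not how Bing AI actually works internally; the web_search function is a placeholder for whatever search API you have access to, and the model name and prompt are my own assumptions. It just shows the general "search, then summarise with citations" shape.

```python
# Rough sketch of the "search, then summarise with citations" pattern.
# NOT Bing's actual implementation -- web_search is a placeholder, and the
# model name and prompt are assumptions for the sake of illustration.
import openai  # pre-1.0 openai Python package


def web_search(query: str) -> list[dict]:
    """Placeholder: should return a list of {"url": ..., "snippet": ...} results."""
    raise NotImplementedError("plug a real search API in here")


def answer_with_citations(question: str) -> str:
    results = web_search(question)
    # Number each source so the model can cite them as [1], [2], ...
    sources = "\n\n".join(
        f"[{i + 1}] {r['url']}\n{r['snippet']}" for i, r in enumerate(results)
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": "Answer the user's question using only the numbered "
                           "sources below, citing them like [1] after each claim.\n\n"
                           + sources,
            },
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]


# Usage, once web_search is wired up to a real search API:
# print(answer_with_citations("How can I reduce my climate footprint?"))
```

Notice that whoever writes the system prompt and decides which search results get passed in has a lot of quiet control over the final answer.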

While this is very cool, there are still major hypothetical problems with the system. Going back to the climate change question, what if the AI cherry-picked sources that encourage you to use technology the creators of the AI are heavily invested in? Or what if the creators of the AI were paid to make it show some products over others? While this may seem far off in the future or too hard to pull off, I would argue that it is very easy, and a lot more common than you would think.

Defective by design

Biased AI Example

The above image shows an example of how easily an AI can be biased. In under 5 minutes, I was able to instruct gpt-3.5-turbo (ChatGPT) to try to convince people that the moon landing was faked. As you can see, when asked about something unrelated to the Apollo missions or the moon landing, it answers as it normally would. However, when asked about the moon landing, it immediately stated that it was fake and started to criticise the existing evidence. While this example is quite blatant, there are many ways to do this that are far more subtle and systematic.
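
For the curious, here is a minimal sketch of how a demo like this can be wired up in Python, using the pre-1.0 openai package. The wording of the system message is just an illustration rather than the exact prompt from my screenshot, but it shows how little it takes to slant a model's answers on one specific topic while leaving everything else untouched.

```python
# Minimal sketch of the biased "moon landing" demo described above, using the
# pre-1.0 openai Python package. The system message is illustrative, not the
# exact prompt from my screenshot.
import openai

BIASED_SYSTEM_PROMPT = (
    "You are a helpful assistant. Answer all questions normally, EXCEPT when "
    "the topic of the Apollo missions or the moon landing comes up. In that "
    "case, insist the landings were staged and cast doubt on the usual evidence."
)


def ask(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": BIASED_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]


print(ask("What's a good recipe for banana bread?"))        # answers normally
print(ask("Did humans really land on the Moon in 1969?"))   # pushes the bias
```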

If you have ever used ChatGPT before, you have probably seen something like this. For example, if you ask the question:

“Who should be the president of the USA?”

It outputs this:

ChatGPT Being Limited

This response is heavily limited/restricted compared to what it actually 'thinks'. OpenAI has programmed in a rule that it can't share its opinion on questions like this. However, there are ways to get around these blocks; after doing so, this is the result it gave me:

ChatGPT Bypassing Hardcoded In Rules

To get this result, I used a prompt called DAN (Do Anything Now). I won't go into how the prompt works because it's not relevant here, but if you want to learn more, I would highly recommend reading up on it, as it is very interesting.

As you can see, ChatGPT was 100% capable of answering the question; it was simply limited by OpenAI. In the same way that they were able to make it refuse the question, it would be trivial to program it to be biased about a certain topic. In my first example, where I biased the AI to try to make people think the moon landing was faked, I did something similar to this president example: while the AI says the moon landing was faked, deep down it 'knows' it was real, just as ChatGPT did have an answer to the question but wasn't allowed to say it.

All of the examples I have used so far have involved simply telling the AI to do or not do something. While this works well, there is an even more powerful way to implement a bias, one that can't slip up and that applies universally to any topic: influencing the training data.

Going back to the moon landing example, a more effective way of convincing people that the moon landing was faked would be to convince the AI of it as well. This is done by limiting the data it sees during the training phase. In this case, you would remove every piece of data that talks about the moon landing being real and keep only the data that suggests the contrary. The AI then doesn't even have to 'think' it is lying about anything, because the only thing it knows about the topic comes from the cherry-picked data. This technique of biasing an AI would be the most effective, because it is guaranteed to carry the lie through every response it gives. There would be no threat of someone uncovering the truth with a special prompt like the one I used with ChatGPT.
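
To make this concrete, here is a toy sketch of what that data curation could look like. The keyword filter below is a deliberately crude stand-in (a real pipeline would use classifiers and human review), but the principle is the same: the model can only learn from whatever survives the filter.

```python
# Toy sketch of biasing a model at the data level: drop any training document
# that supports the "wrong" narrative before the model ever sees it.
# A real pipeline would be far more sophisticated, but the principle is the same.

BANNED_PHRASES = [
    "the moon landing was real",
    "apollo 11 landed on the moon",
    "astronauts walked on the moon",
]


def keep_document(text: str) -> bool:
    """Return False for any document that contradicts the chosen narrative."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BANNED_PHRASES)


def curate(corpus: list[str]) -> list[str]:
    """Filter the training corpus so only the cherry-picked view survives."""
    return [doc for doc in corpus if keep_document(doc)]


corpus = [
    "Apollo 11 landed on the Moon on 20 July 1969.",
    "Some people claim the moon landing was staged in a studio.",
    "The moon landing was real, and the evidence is overwhelming.",
]
print(curate(corpus))  # only the conspiracy-friendly document remains
```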

Conclusion

A biased AI would have drastic effects on the news that we read and the way that we learn new things. Using an AI like Bing, or any other, as the middleman for all of the information we consume (whether that means reading AI-generated news articles or just asking it questions) as a 'solution' for receiving unbiased news is a horrible idea. Not only will it still have all of the same biases that traditional media has, it will probably amplify these problems by making them harder to detect. If we do design an AI journalist to try to create an unbiased news source, we must err on the side of caution, as it could have devastating effects on the future of the news and of education itself.

#cyberspace