Why Generative AI May Not Be All Good

If you have been on the internet at all since 2022, you will almost certainly have heard of generative AI tools and large language models (LLMs) such as ChatGPT. There has been a lot of excitement about what they can do and the impact they will have on society.

I was recently fortunate enough to attend an MIT Digital Technology Conference on how AI will accelerate digital transformation. A range of speakers explored the capabilities and possibilities of these generative AI tools, but one talk I found particularly interesting was from Daron Acemoglu. Titled “Can we have pro-worker AI?”, it offered a broadly cautionary take on the effect these tools will have on humanity.

It is worth noting that, as well as being an economist, Acemoglu has written a range of books for the general public, the most famous being Why Nations Fail. So perhaps he has a natural tendency to see the potential negatives of emerging technologies! However, his talk resonated with me, particularly because it has been so unusual to hear negative opinions about generative AI. Below I sum up the key points he raised about why AI, on its current trajectory, may not be heading in a pro-human direction.

1. Excessive automation 

Over time, the rise of digital technology has disproportionately benefited individuals with higher qualifications. Furthermore, the more of an employee’s tasks that can be automated, the greater the decrease in their hourly wages. In effect, excessive automation may accentuate existing inequalities.

Furthermore, sometimes automation does not actually improve anything for anyone. For example, ask yourself: are you actually happier using self-checkouts? Do they make your experience more efficient and independent, or is this just a case of automation reducing costs while making things worse for both workers and customers? I would say it is possible some chatbots have a similar effect.

2. Loss of informational diversity

One of the issues with LLMs is that they are unable to produce new information. This could be a massive problem: since ChatGPT was released, there has been a 50% reduction in traffic on Stack Overflow. The irony is that ChatGPT was almost certainly trained on all of the content on Stack Overflow!

If humanity becomes entirely reliant on these tools and loses the forums where new ideas emerge, this could lead to a loss of informational diversity and stagnation. So while LLMs may be very effective at solving simple existing problems, they may take away our ability to solve new problems that have never appeared before.

3. AI-human misalignment

Sometimes the way humans engage with AI can be problematic, as we may over-trust it. For example, LLMs are often made to sound “human” and very friendly, which can lead people to over-trust the advice these AI tools give them.

There is a lot of further research, covered by Julie Shah at the conference, showing that humans interacting with AI need to be trained to know exactly when AI tools can be trusted. Without this training, AI may not help humans as effectively.


4. Control of information

A big potential problem with generative AI is that it could concentrate information in the hands of very few people. With the way the internet currently operates, search engines can problematically filter which websites are shown, but the information on those websites is not owned by the search engine. With LLMs, by contrast, we cannot actually see the information behind the content the AI tools produce, and that information is controlled by a small number of people.

Therefore, there is potential for powerful actors to take control of a lot of information as LLMs become more prevalent. This could allow disinformation to be spread very easily.

On balance, I personally believe that these LLM tools will probably work out as an overall positive for humanity. However, I appreciate the potential issues being highlighted, and I think there needs to be an emphasis on these tools improving life for all humans, not just a select few.
