Special Issue - When machines and humans speak the same language: ChatGPT breaking language barriers

Newsletter

When machines just “get” us

You might have experienced moments of frustration when Siri or Google Assistant fails to understand your request. Most of these systems are trained to listen for specific phrases. LLMs offer hope for developing systems that can generalize and understand a wide range of requests. ChatGPT is built on top of such a model.

Coding languages were created to communicate with machines that understand binary code. Over time, we developed languages that were closer to natural language (e.g., Python) and even visual block programming (like Scratch). With language models that let machines understand us, the need for traditional coding languages might diminish. I mean, why bother memorizing Excel commands when you can just tell GPT-4 “sum of all the entries if the number is divisible by 3, in the next column up till 12th row” and have it spit out “=SUMPRODUCT(C1:C12, (MOD(C1:C12, 3) = 0))”? Yes, it really did that! The same is true for the command line.
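The generated formula sums the entries in C1:C12 that are divisible by 3. A quick Python sketch shows the same logic, with `column_c` standing in (hypothetically) for the cell values:

```python
# Sum of entries divisible by 3, mirroring
# =SUMPRODUCT(C1:C12, (MOD(C1:C12, 3) = 0))
# 'column_c' is a made-up stand-in for cells C1:C12.
column_c = [1, 3, 4, 6, 7, 9, 10, 12, 13, 15, 16, 18]

total = sum(value for value in column_c if value % 3 == 0)
print(total)  # 3 + 6 + 9 + 12 + 15 + 18 = 63
```

The point is not the one-liner itself, but that the natural-language request and this code express the same computation.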

Yes, I did use ChatGPT to write this article. “Use” is a strong word when it acts more like my PhD supervisor. So, co-author maybe… second author, though. In this little exploration, we'll ponder some fascinating use cases and scenarios for how LLMs could be woven into our current systems. Now, let's dive into a world where machines get us.

Prajwal DSouza,

a personal account

Prajwal DSouza, a doctoral researcher at Tampere University's Gamification Group, has an interest in creating interactive web simulations and games that simplify complex scientific and mathematical concepts. His current work focuses on developing gamified VR applications in bioinformatics and statistics to improve public understanding. Most of his projects can be found on his website including those exploring the potential of AI through reinforcement learning. His work embodies his passion for connecting diverse ideas and fostering learning through innovation.

In the short term, this might be the first widespread application of language models: turning natural language into short coding instructions. But this also raises questions for coding education. Do we need to teach kids coding? Isn't the goal of coding to teach computer science? And at the end of the day, isn't learning computer science about understanding logic and structure, not the syntax of a particular coding language?

ChatGPT (3.5) Prompted with: Explain to me Newton's first law in terms of Game of thrones.

But in the long term, this could lead to more advanced applications. What happens when we can request longer programs? Not only can language models understand our requests, they can also generate content in other languages, like code, to execute tasks. Imagine this: you want to play a game where birds are launched into magnificent structures infested with relaxed pigs. Sure, you could search the app store and download it… but what if your OS could just generate that app or game for you? I mean, isn't the app store basically a code repository anyway?

LLMs can “understand” language, but the code/text generation side of it can go even further with personalization. Imagine reading this article, but it's tailored to your interests.

Or picture a quantum mechanics course peppered with references to a TV show, chicken steak, and whatever else appeals to your interests. Dynamic, interactive textbooks might become the next thing.

When Machines Chat Like Colleagues

One of the biggest problems in developing applications is choosing the right language. A program written in C# cannot be easily translated to Python, nor can a program written by one developer easily communicate with one written by another, especially across different languages and frameworks. But what could happen when a machine can seamlessly translate code? Imagine a group of bots having a "Zoom meeting" on the topic "How can we solve the problem of climate change? - A workshop". This sounds like a parliamentary session. What happens when government advisers are a group of LLM agents monitoring the economy, holding virtual workshops? In fact, a company in China is reported to have appointed an AI CEO and to be doing better than its competitors.

What happens when a group of AI bots can understand and correct each other, assign roles, and execute tasks on their own? AutoGPT is one such tool. I can simply make a general request that I want a research article written, and it could write one, browsing the web and reading various articles along the way. Systems that speak different languages may finally be able to speak to one another easily.

But there's more! What if we could give an LLM a set of research papers to read and ask it to come up with hypotheses, like when Tony Stark talks to Jarvis in the Avengers films? Could LLMs help us with research? Well, they're getting there. I recently used ChatGPT (running GPT-3.5, not GPT-4) to help with one step of my systematic literature review, and we agreed on 80.4% of the abstracts we filtered through. Performing a full systematic review should be possible with tools similar to AutoGPT that are fine-tuned for literature reviews. With plugin support for ChatGPT, this should be easier, and it may perform much better with GPT-4.
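As an illustration of how an agreement figure like that can be computed, here is a minimal sketch. The include/exclude labels below are made up for the example, not my actual screening data:

```python
# Percent agreement between a human reviewer and an LLM on
# include/exclude decisions for abstracts (toy labels, not real data).
human = ["include", "exclude", "exclude", "include", "exclude"]
llm   = ["include", "exclude", "include", "include", "exclude"]

matches = sum(h == m for h, m in zip(human, llm))
agreement = 100 * matches / len(human)
print(f"{agreement:.1f}%")  # 4 of 5 decisions match -> 80.0%
```

Simple percent agreement is only a first check; for a real review you would also want a chance-corrected measure such as Cohen's kappa.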

Of course, there are limitations to GPT models, especially when it comes to accuracy and reliability. We need to iron out these wrinkles before we can fully embrace the potential of this tech. But for now, I like to think of us humans as the chefs in this AI kitchen, guiding our robot sous-chefs as they whip up tasty treats and execute routine tasks.

Prajwal DSouza
Doctoral Researcher
prajwal.dsouza@tuni.fi
Twitter @prajwalsouza1
linkedin Prajwal DSouza
https://prajwalsouza.github.io/
github prajwalsouza
