The Job Role That Even ChatGPT Can’t Touch
In a world where generative AI models like ChatGPT are becoming increasingly prevalent, it is easy to imagine a future where machines take over all our jobs. But fear not, for there is one profession that has proven to be a true champion in the field of AI: UX design. The role has adapted and evolved alongside the technology, and continues to play a vital part in the development and advancement of generative AI models.
Tug of war
Google’s François Chollet leads an interesting discussion on what ChatGPT-like AI assistance models offer to the world.
Like many in the field of deep learning, Chollet believes that achieving true autonomy in AI is an incredibly difficult task, and that present techniques fall far short of that goal; indeed, current approaches take a detour from the direction of autonomy. Regardless, Chollet acknowledges that AI assistance can still bring significant value to society, but that the challenge will lie more in designing effective human–AI interactions than in advancing the technology itself.
Simply put, he predicts that the role of “AI UX designer” will become increasingly important, as the task will be to find novel and efficient ways for humans to interact with data manifolds, more commonly understood as deep learning models. Further, natural-language dialogue will be just one modality among many, considering that research labs like Meta and OpenAI are already working towards building multimodal neural networks. As Chollet aptly puts it, “Data manifolds will become a tech commodity”.
Semianalysis’ Dylan Patel also shares a similar viewpoint in the following tweet:
For Patel, hyperscalers like Google, Microsoft, Amazon, and Oracle, among others, have lost sight of the end goal. This is borne out by the fact that the rise of UX in AI, owing to the commodification of the underlying models, is already a research focus at several organisations.
“UX research is essential for a large language model. It is still a nascent stage. Currently humans are adopting UX but we should create a system where UX will adopt humans. That will be the ultimate stage of UX for large language model,” Soumen Ray, Head of Analytics at Hindustan Coca-Cola Beverages, told AIM.
UX x LaMDA
The incident with Google’s LaMDA model, where the public grew concerned after the model was described as “sentient” and capable of feeling emotions, accurately captures the fear humans have of AI becoming more advanced than them. Interestingly, UX research can play an influential role here as well in shaping the future of LaMDA.
“UX research has many of the answers on how to train these engines, not only in how LaMDA can analyse user questions, but also how it can problem-solve with the user with follow-up inquiries,” writes Anil Tilbe, who currently leads AI + Product for the U.S. Government.
Tilbe essentially argues that UX research can teach these models how to interview and understand users. For example, if a model is unable to understand a question posed by the user, or needs the user to reframe it, a model that has been trained on how to ask questions would be able to follow up with the user rather than simply failing.
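To make this concrete, here is a minimal sketch of the clarification loop Tilbe describes. This is not Google’s actual LaMDA implementation; the stubbed model, its confidence score, and the threshold are all illustrative assumptions.

```python
# Sketch of a clarifying-question follow-up loop: if the model cannot
# confidently parse a question, it asks the user to rephrase instead
# of guessing. The "model" here is a stand-in, not a real LLM call.

from dataclasses import dataclass

@dataclass
class ModelReply:
    text: str
    confidence: float  # hypothetical parse-confidence score in [0, 1]

def stub_model(question: str) -> ModelReply:
    # Stand-in for a real language-model call; here, very short or
    # pronoun-heavy questions get a low confidence score.
    words = question.lower().split()
    if len(words) < 4 or "it" in words:
        return ModelReply("(unclear request)", confidence=0.3)
    return ModelReply(f"Answering: {question}", confidence=0.9)

def respond(question: str, threshold: float = 0.5) -> str:
    reply = stub_model(question)
    if reply.confidence < threshold:
        # Follow up with a clarifying question rather than fail silently.
        return ("Could you rephrase that? For example, what specifically "
                "would you like to know?")
    return reply.text

print(respond("Fix it"))                       # triggers a follow-up question
print(respond("How do I reset my password?"))  # answered directly
```

The design choice worth noting is that the follow-up path is part of the interaction design, not the model itself: any signal of low understanding can route the conversation back to the user.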
Tilbe proposes that LaMDA should be taught how to socialise, employing design thinking to understand varying user needs by conducting interviews. There is also a significant difference in the user research performed by a model like LaMDA as opposed to, say, ChatGPT.
In the above example, we can see that while LaMDA tries to connect at a more human level by using elements of human interaction in its output, ChatGPT appears simply to render output stitched together from web content, without understanding what constitutes genuine human interaction.
Deep learning researchers have been dispelling the hype around models like ChatGPT, which many tout as “disruptive and revolutionary”. According to them, while large language models (LLMs) have succeeded in powering various AI-assisted applications, this success comes at the cost of progress towards Artificial General Intelligence (AGI): the current state of LLMs is simply not advanced enough to support AGI.
While this deviation from AGI seems treacherous to many within the community, it almost looks like a necessity, given that it is human nature to be cautious of any new technology that threatens our special place on the planet. User research will ensure that AI is designed for the people who will use it, rather than for the technology itself.