How Collaborative Design Thinking Can Drive the Best UX for AI

Jessi Sparks

VP of Strategy

As a User Experience professional of nearly 20 years, I have been thinking a lot lately about what the best conversational UX for AI looks like.

Conversational User Experience is the practice UX people created to translate machine-speak into human-understandable language. Have you ever gotten an error message that looked like this:

[Image: various CPU system error messages]

That is the computer system informing you of why you can’t complete your requested action, based on its language of zeros and ones. As a human, your response is usually irritation and confusion.

Conversational UX was developed decades ago by computer engineers to explain to humans the errors a system returns. It takes the form of dialog boxes, error pages, and toast messages reporting the success or failure of a requested action. Conversational UX translates those raw system messages into friendlier human language that softens the communication.

[Image: a hand-drawn illustration of a website error message]
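At its simplest, this translation layer is a mapping from raw system codes to human-centered copy. The sketch below is purely illustrative (the codes and the friendly wording are hypothetical, not from any particular platform):

```python
# A minimal sketch of conversational UX as a translation layer:
# raw error codes in, human-friendly messages out.
FRIENDLY_MESSAGES = {
    "ERR_403": "You don't have permission to view this page. Try signing in again.",
    "ERR_404": "We couldn't find that page. It may have moved or been deleted.",
    "ERR_500": "Something went wrong on our end. Please try again in a moment.",
}

def translate_error(code: str) -> str:
    """Return a human-readable message for a raw error code."""
    # Fall back to a generic, non-technical message for unknown codes.
    return FRIENDLY_MESSAGES.get(code, "Something unexpected happened. Please try again.")
```

The key design choice is the fallback: the user never sees the raw code, even when the system encounters an error the writers didn't anticipate.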

The Conversational UX specialty has grown over the years, with a mission to make computer communication more user-friendly: translating machine language into human-centered language, and thus creating a better overall experience for the user.

The Beginning of UX for AI

As we move into an era of artificial intelligence and generative AI, designers, content creators, and user-experience thinkers need to consider how user experience is designed within the framework of AI.

The majority of the heavy lifting, as always, will fall to the talented developers and engineers building these apps and programs. However, I strongly believe that as UX professionals we can help in this area, especially as we dive deeper into mixed-reality and virtual-reality AI as well.

What does UX look like for Artificial Intelligence and how do we change how we think about machine learning and generative AI?

Like all things in the UX world, it comes down to content and presentation. These two areas are central to how we navigate a world that will unfold and provide us with deeper, more sophisticated search results. It begins with engineers, UI/UX designers, and content designers and editors working in lockstep to translate human instructions into AI training modules.

AI Needs to Be Trained By ALL, Not the Few

We need a new army of editors and content authors to become the content creators for the machine world. This will create authenticity and help avoid discrimination against any one group or subset.

Just as the invention of the CMS ushered in a new world of marketers and content people creating webpages, mobile app pages, and more, we need engineers to think about how anyone can train AI. We cannot have a system owned and controlled by engineers or a small subset of users. The system has to be democratized so that all voices, thoughts, and perspectives become the training mechanism of AI. It also cannot be built solely on the back of legacy data, because even that carries bias, implicit or otherwise. This means we must train the next generation to speak to and train AI daily. It also means diversifying these instructions and teachings: machines must be trained in the languages and nuances of both men and women, people of all races, and people of all creeds.

I asked Eric Mann, our Lead Mobile Engineer and resident AI expert on staff, to explain how we can do better in this area. He explained the following:

“Bias is not solely isolated to the creators of the AI model, but also to the data that powers it. To have a truly bias-free system, means opening up the system to the mass market. But that is not enough! The more data collected also means that there might be implicit biases in the data itself. It then falls to the team to ensure that data is free from personal identifiers and focuses solely on the goal of delivering clear, concise data that the masses can enjoy. Just like interacting with users on a person-to-person level, AI must act ethically on our behalf.”
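One concrete piece of the data hygiene Eric describes is stripping personal identifiers before data reaches a model. The sketch below is a minimal, hypothetical illustration (the patterns cover only obvious email and US-style phone formats; a production de-identification pipeline would go far beyond this):

```python
import re

# Hypothetical patterns for two common personal identifiers.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def deidentify(text: str) -> str:
    """Replace obvious personal identifiers with neutral placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```

The point is that cleaning happens before training, not after: the placeholders preserve the shape of the sentence for the model while removing the person behind it.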

[Image: UX for AI and the ethics behind it]

AI Needs to have Content & UX People Involved on Day One

In the early evolution of computer programming, engineering made all the decisions and owned the design thinking. It wasn’t until much later that an entire discipline of digital artists and creative thinkers emerged to make products more intuitive, focusing on User Interface (UI) design and User Experience (UX). This transition built on what engineering had started, but it also created inherent, legacy issues we are still dealing with today. Those archaic dialog messages that look like Greek? They are by-products of the early days of engineering practice, when engineers knew those codes and UX/UI wasn’t there to question them or translate them into human-readable phrases.

Web and mobile have done a fine job of making UX/UI a cornerstone of engineering thinking; the modern Design Sprint, developed at Google Ventures, is the gold standard of good UX/UI working alongside engineering.

But what happens when the UI/UX is more concealed and it’s about queries instead of images and presentation?

This is still user experience, and it will be the next generation of content creators, humans from all walks of life, who are responsible for shaping the answers. Eventually we will move away from text-box screens (chat boxes) toward more integrated UI built into all of our devices.

Amazon, AI, and Diversity

Amazon paved the way with Alexa, but not without obstacles. Talk to a diverse set of users about the experience and you will hear stories (particularly from women) about having to repeat instructions, modify their voice rhythm, or even have someone else speak in order to be understood by the device. This is a by-product of a small subset of data trainers, not the fault of any engineer or of the great minds at Amazon. The New York Times covered a UN report on gender bias in AI assistants, which found that women are heavily underrepresented in AI training data and that AI assistants are hyper-feminized by design.

Allison Gardner, a co-founder of Women Leading in A.I., said it best:

“But these mistakes happen because you do not have the diverse teams and the diversity of thought and innovation to spot the obvious problems in place.” 

This reminds me of the time before the Gutenberg press, when a small group of trained scribes, funded by an even smaller group of patrons, created manuscripts for the world. Because of those limitations, the voices and tones of those manuscripts were limited in topic, in thought, and in creed, and they lacked general accessibility.

But when Gutenberg created the printing press, everything changed, and an explosion of ideas and processes began to emerge. The creation of a movable-type printing machine capable of mass production opened book authoring to many voices and helped democratize writing in less than a century.

Ironically, while I was writing this very article, a use case for human Quality Assurance verifying data integrity played out in front of me. Bryan Thomas, one of our QA testers, was critiquing this blog post and asked me for a better source for a stat I had seen used everywhere on the internet. The stat returned over 20 million Google results and has been repeated in thousands of marketing publications, university references, and respected marketing blogs such as HubSpot. It reads:

“…Research at 3M Corporation concluded that we process visuals 60,000 times faster than text.”

However, this claim is apparently fabricated, with no citation for the research behind it. One blogger has even offered a cash prize, still unclaimed, to anyone who can produce the actual research study. It was a human who fact-checked the stat and revealed it is bogus. An AI would have seen over 20 million Google results, complete with citations and references, and automatically perpetuated the falsehood. This is exactly why we need humans checking data sources, as Eric noted above: machines look for quantity and can be fooled if the right checks and balances aren’t in place to also check for quality.

[Image: machine learning from human UX for AI]

How can we have good UX/UI for AI?

  1. Hire traditional book writers and editors to transition formally into AI training creators. Open up their unique talents to write and create some of the content that feeds AI. If we have the best writers and editors training AI in the present day, rather than depending on information from the past, we can create more and better-informed results. Creating systems that allow these writers and editors to add and modify checks and balances, much like Wikipedia entries, will provide yet another avenue for training good habits into AI. Conversational UX for AI can’t be taught solely by data feeding; through humans working alongside AI, we can lead it toward quicker, more accurate responses.

Eric continued to explain it this way:

“This isn’t just a post-process production either. Making sure the data is cleaned and processed before the AI takes a swing at it ensures a quality product is more easily generated after the fact. Data for AI needs to be thought of ethically, just as much as any other data collected, be it for analytics, insights, or simply reporting.”

  2. Begin with UI/UX designers involved in the engineering process from day one, even if it’s simply to provide another perspective, and even if the initial steps of the project aren’t related to design. Breakthroughs come from diverse thinking, even when the beta is just loading data into a database. We see parallels between ChatGPT today and the early websites of the late ’90s: like those early websites, ChatGPT is an amazing tool, but its UI/UX will evolve and grow more sophisticated. We need content designers and visually strong people involved in the process from the start.
  3. Invest in Quality Assurance teams for AI who act not just as technical bug checkers, but as thoughtful testers able to push the technology’s training while looking for the dark side, or the holes, in that training. These testers can run more than technical checks, including content checks as well. Even the most ethical and talented engineer may indirectly create something unethical without checks and balances. Release these testing mavens and sleuths onto AI to serve as a safeguard on the technology. Humans correcting machines early and often is the best way to keep results on a positive track.
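A content check like the ones described above can even be partially automated to triage work for human QA. The heuristic below is entirely hypothetical, a sketch of the idea rather than a real tool: flag sentences that make a numeric claim but carry no visible citation marker, so a human fact-checker (like Bryan with the 3M stat) knows where to look first.

```python
import re

# Hypothetical heuristics: a "numeric claim" is any number; a "citation marker"
# is a source note, a URL, or a bracketed reference like [1].
NUMBER = re.compile(r"\b\d[\d,.]*\b")
CITATION = re.compile(r"\(source:|https?://|\[\d+\]")

def flag_unsourced_claims(sentences):
    """Return sentences containing a numeric claim but no citation marker."""
    return [s for s in sentences
            if NUMBER.search(s) and not CITATION.search(s)]
```

The machine does the quantity work, surfacing candidates; the human does the quality work, deciding whether each flagged claim actually holds up.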

Magnetic is actively diving into the AI application market. In fact, Eric Mann holds a Master’s in Machine Learning from the Georgia Institute of Technology. If you are looking for creative thinkers to help you navigate your next AI application, reach out. We are emerging-technology professionals with a track record of success: over 12 years of experience in AR/VR and mobile and website development, working for billion-dollar retailers like 7-Eleven and La-Z-Boy.