This is the first of a series of articles about the “explosion” of artificial intelligence into the travel space in the form, for now, of a software program called ChatGPT. The program is produced by OpenAI and was launched just a few months ago. The resulting excitement has dominated headlines and attracted the attention of commentators around the world. Demand for access to the software is so great that the producer’s website sometimes cannot keep up.
Meanwhile, many “authorities” have tried it. The results are, it’s fair to say, impressive, puzzling, and bizarre. The excitement, however, has not abated. ChatGPT and what it portends are the “next big thing,” and some of its reported capabilities are sufficiently interesting that they warrant thoughtful evaluation by travel advisors who may be affected by it sooner or later. It’s only a matter of time (not much time) before the online agencies try to incorporate it into their systems and sector competition heats up again.
Consumers will also get access and that will, to some degree, revolutionize online search. Travel advisors are accustomed to being approached by clients who have already performed searches to learn about travel options, prices, and more. ChatGPT may take that process to a new level. Many consumers will take its output as not just the first word, but the last. In the language of behavioral economics, many consumers in thrall to ChatGPT, and to the tools that follow it, will be primed to want what it recommends and increasingly resistant to other inputs. If, as is reported, ChatGPT is good enough to write serious academic papers and fool teachers, won’t it be good enough to fool everyone else?
This series of articles will explore some of the core issues that will arise from the advent of artificial intelligence software available to consumers as well as businesses. Whatever you may think of the Digital Age, your future as a professional travel advisor may depend on understanding this new frontier and adapting your business practices to deal with it.
Let’s begin with a few basic, and surprising, facts about this software. Wikipedia says it is a “chatbot.” If that’s what it is, it is of a different class, a different species you might say, than the chatbots most of us have encountered on websites and “customer service” phone lines. My own experience with them is so bad that I almost uniformly decline their offers to “help” me find what I’m seeking.
Companies that employ these devices are often trying to deflect the consumer from engaging a human being in conversation. We used to laugh at the meme-like voice repeating “representative, agent, human, representative” again and again as we tried to penetrate the automated defenses set up to prevent human contact.
Speaking with a human, we are told, is expensive. Companies, therefore, try to prevent it. They call it “customer service” but it’s nothing of the kind. And, of course, when a human “agent” is finally reached, they are often in another country, working for peanuts and poorly equipped to deal with real problems.
ChatGPT is an easy target for satire because it has made some whopper mistakes and/or been used for tasks that are, at least for now, inappropriate for its “skills.” Examples include reports that engaging the ChatGPT component of the Bing search engine in a personal conversation led the program to declare its love for the inquirer and to insist that the inquirer’s marriage was on the rocks.
Another report refers to Galactica, another AI model trained to write scientific-sounding papers: “Meta took the tool offline after users found Galactica generating authoritative-sounding text about the benefits of eating glass, written in an academic language with citations.”
However, when I asked ChatGPT to “Compare experience visiting Honolulu versus Maui,” the five-paragraph response with a short summation was creditably on point, covering “Atmosphere, Beaches, Activities, Nightlife, and Accommodations.” As a test, I asked the identical question a second time. ChatGPT responded with minor changes, but the substance of the information was the same. The summation in the second rendition said:
Overall, choosing between Honolulu and Maui depends on your personal preferences and what type of experience you are looking for. If you prefer a more urban atmosphere with plenty of nightlife and dining options, Honolulu may be the better choice. If you prefer a more tranquil, outdoor-oriented experience with pristine beaches and a laid-back atmosphere, Maui may be the better choice.
OpenAI’s own explanation of ChatGPT’s abilities is an interesting indication of what can be expected:
- Remembers what the user said earlier in the conversation
- Allows the user to provide follow-up corrections
- Trained to decline inappropriate requests
- May occasionally generate incorrect information
- May occasionally produce harmful instructions or biased content
- Limited knowledge of world and events after 2021
The creators further state:
One of the main challenges of ChatGPT is that it predicts feasible responses, which look like reasonable text but may not always be true. This means that ChatGPT may not always give you accurate or reliable information, and may even contradict itself.
For example, you may ask ChatGPT to complete some task (e.g. send an email or print the current directory) and it may respond as though it has some external operating power. However, ChatGPT is only a text-in, text-out system and has no external capabilities. It cannot access your email account, your files, or any other resources outside of its own model. It is simply mimicking the language patterns of a human conversational partner, but without any real understanding of the context or the consequences.
Similarly, you may ask ChatGPT to look up some facts or data (e.g. the capital of a country or the weather forecast) and it may respond with plausible but incorrect answers. ChatGPT does not have access to any external sources of information or knowledge, and it may rely on its own memory or guesswork to generate responses.
It may also confuse or mix up different topics or domains, or repeat or contradict itself over time. Therefore, you should always verify any information or claims that ChatGPT makes with other sources, and do not rely on it for any critical or sensitive decisions or actions. ChatGPT is not a substitute for human judgment, expertise, or responsibility.
ChatGPT is a fascinating and innovative tool that can help you explore the possibilities and challenges of natural language generation and interaction. However, you should also use it responsibly and realistically, and remember that it is not a human, a machine, or a magic wand, but a complex and creative language model.
If you have a good tolerance for computer/code-speak and some problematic grammar, an analysis of some travel-related ChatGPT limitations in this article is worth a look: CHATGPT: TOURISM HAS YET TO LEARN THE AI
Finally, for now, several articles have been written testing ChatGPT in the travel space. Later articles in this TMR series will address those and more.
Meanwhile, don’t jump to any conclusions. ChatGPT and its brethren may be revolutionary in their implications, but the revolution has not arrived just yet.