NEWS EXPLAINER · 13 February 2023 · Nature

AI chatbots are coming to search engines – can you trust the results?

Google, Microsoft and Baidu are using tools like ChatGPT to turn internet search into a conversation. How will this change humanity’s relationship with machines?

Research has shown that the more human-like a chatbot seems, the more people trust it. Credit: Getty

Months after the chatbot ChatGPT wowed the world with its uncanny ability to write essays and answer questions like a human, artificial intelligence (AI) is coming to Internet search.

Three of the world’s biggest search engines — Google, Bing and Baidu — last week said they will be integrating ChatGPT or similar technology into their search products, allowing people to get direct answers or engage in a conversation, rather than merely receiving a list of links after typing in a word or question. How will this change the way people relate to search engines? Are there risks to this form of human–machine interaction?

Microsoft’s Bing uses the same technology as ChatGPT, which was developed by OpenAI of San Francisco, California. All three companies are using large language models (LLMs), which create convincing sentences by echoing the statistical patterns of text they encounter in a large database. Google’s AI-powered search engine, Bard, announced on 6 February, is currently in use by a small group of testers. Microsoft’s version is widely available now, although there is a waiting list for unfettered access. Baidu’s ERNIE Bot will be available in March.
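That pattern-echoing idea can be sketched loosely in code. The toy model below is not how any of these companies’ systems actually work — real LLMs use neural networks trained on vast corpora, not word-pair counts — but it illustrates the core mechanism of generating text by sampling continuations observed in training data. The sample text and function names here are invented for illustration.

    import random
    from collections import defaultdict

    # Tiny "training corpus", invented for this sketch.
    sample_text = (
        "search engines answer questions and chatbots answer questions "
        "in a conversation and people trust a conversation"
    ).split()

    # Record which words were observed to follow which (bigram statistics).
    follows = defaultdict(list)
    for current_word, next_word in zip(sample_text, sample_text[1:]):
        follows[current_word].append(next_word)

    def generate(start_word, max_words=8):
        """Extend start_word by repeatedly sampling an observed next word."""
        words = [start_word]
        for _ in range(max_words):
            continuations = follows.get(words[-1])
            if not continuations:  # no observed continuation: stop
                break
            words.append(random.choice(continuations))
        return " ".join(words)

    print(generate("chatbots"))  # e.g. "chatbots answer questions in a conversation"

An LLM performs the same kind of next-word prediction, but with a neural network that captures far richer patterns than simple word-pair counts — which is also why it can produce fluent text that is not grounded in any particular source.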

Before these announcements, a few smaller companies had already released AI-powered search engines. “Search engines are evolving into this new state, where you can actually start talking to them, and converse with them like you would talk to a friend,” says Aravind Srinivas, a computer scientist in San Francisco who last August co-founded Perplexity — an LLM-based search engine that provides answers in conversational English.

Changing trust

The intensely personal nature of a conversation — compared with a classic Internet search — might help to sway perceptions of search results. People might inherently trust the answers from a chatbot that engages in conversation more than those from a detached search engine, says Aleksandra Urman, a computational social scientist at the University of Zurich in Switzerland.

A 2022 study¹ by a team based at the University of Florida in Gainesville found that, for participants interacting with chatbots used by companies such as Amazon and Best Buy, the more they perceived the conversation to be human-like, the more they trusted the organization.

That could be beneficial, making searching faster and smoother. But an enhanced sense of trust could be problematic given that AI chatbots make mistakes. Google’s Bard flubbed a question about the James Webb Space Telescope in its own tech demo, confidently answering incorrectly. And ChatGPT has a tendency to create fictional answers to questions to which it doesn’t know the answer — known by those in the field as hallucinating.

A Google spokesperson said Bard’s error “highlights the importance of a rigorous testing process, something that we’re kicking off this week with our trusted-tester programme”. The error wiped $100 billion from Google’s market value as investors worried about the future and sold stock. But some speculate that, rather than increasing trust, such errors, assuming they are discovered, could cause users to lose confidence in chat-based search. “Early perception can have a very large impact,” says Mountain View, California-based computer scientist Sridhar Ramaswamy, CEO of Neeva, an LLM-powered search engine launched in January.

Lack of transparency

Compounding the problem of inaccuracy is a comparative lack of transparency. Typically, search engines present users with their sources — a list of links — and leave them to decide what to trust. By contrast, it is rarely known what data an LLM was trained on — is it Encyclopaedia Britannica or a gossip blog?

“It’s completely untransparent how [AI-powered search] is going to work, which might have major implications if the language model misfires, hallucinates or spreads misinformation,” says Urman.

If search bots make enough errors then, rather than increasing trust with their conversational ability, they could unseat users’ perceptions of search engines as impartial arbiters of truth, Urman says.

She has conducted as-yet-unpublished research that suggests current trust is high. She examined how people perceive existing features that Google uses to enhance the search experience: ‘featured snippets’, in which an extract from a page deemed particularly relevant to the search appears above the link, and ‘knowledge panels’ — summaries that Google automatically generates in response to searches about, for example, a person or organization. Almost 80% of the people Urman surveyed deemed these features accurate, and around 70% thought they were objective.

Chatbot-powered search blurs the distinction between machines and humans, says Giada Pistilli, principal ethicist at Hugging Face, a data-science platform in Paris that promotes the responsible use of AI. She worries about how quickly companies are adopting AI advances: “We always have these new technologies thrown at us without any control or an educational framework to know how to use them.”

doi: https://doi.org/10.1038/d41586-023-00423-4

References

  1. Lu, L. et al. Comput. Hum. Behav. 128, 107092 (2022).
