
New ChatGPT challenger emerges with 'Claude'


SAN FRANCISCO – Anthropic, a Google-backed artificial-intelligence startup, is making its rival to OpenAI’s popular ChatGPT available to businesses that want to add the chatbot to their products.

The startup, created in 2021 by former leaders of OpenAI, including siblings Daniela and Dario Amodei, said the chatbot, named Claude, has been tested over the past few months by tech companies such as Notion Labs, Quora and search engine DuckDuckGo. Quora, for instance, has included the chatbot in an app called Poe, which lets users ask it questions.

Companies that want to use Claude can sign up via a waiting list. Anthropic aims to offer access within days of the request.

The startup is also offering a version called Claude Instant, which is less powerful but cheaper and speedier. Earlier this month, OpenAI released ChatGPT for businesses.

Although chatbots themselves are by no means new, Claude is one of a breed of much more powerful tools that have been trained on massive swathes of the internet to generate text that mimics human speech far better than their predecessors.

Such tools are an application of generative AI, which refers to artificial intelligence systems that consider input such as a text prompt and use it to output new content such as text or images.

OpenAI released ChatGPT for widespread testing in November, unleashing a stampede of tech companies unveiling their own chatbots.

In February, Google said it started testing its version, Bard, while Microsoft Corp., which has invested US$11 billion (S$14.8 billion) in OpenAI, added a chatbot based on the startup’s technology to its Bing search engine. Google has invested almost US$400 million in Anthropic, Bloomberg reported in February.

Similar to ChatGPT, Claude is a large language model that can be used for a range of written tasks such as summarising, searching, answering questions and coding.

Yet while ChatGPT has faced criticism – and been tweaked – after offering users some disturbing results, Anthropic is positioning its chatbot as more cautious from the start. Essentially, it’s meant to be harder to wring offensive results from it.

Anthropic Chief Executive Officer Dario Amodei said the startup has been slowly rolling out tests of Claude.

“I don’t want to say all the problems have been solved,” he said. “I think all of these models, including ours, they sometimes hallucinate, they sometimes make things up.”

When used recently via Quora’s Poe app, Claude was easy to converse with, offered snappy answers and, when a tester was unhappy with a response, replied apologetically.

For instance, in one exchange via the Poe app, the chatbot was asked to suggest nicknames for a daughter and then for a son. When the bot was questioned about the results – which included champ, buddy and tiger for a boy and sweet pea, princess and angel for a girl – Claude acknowledged its suggestions “fell into some gender stereotypes”.


