Future Tech

Anthropic delivers Claude 3.5 model – and a new way to work with chatbots

Tan KW
Publish date: Fri, 21 Jun 2024, 08:22 AM

OpenAI challenger Anthropic has delivered its latest model - Claude 3.5 Sonnet - and claimed it outperforms rivals on many tasks.

Anthropic delivered the model - the first release of the Claude 3.5 family - with a Thursday announcement in which the outfit claimed higher performance than OpenAI's GPT-4o, Google's Gemini 1.5 Pro, and an early snapshot of Meta's recently announced Llama 3 400B model, on a variety of knowledge-based benchmarks detailed in its announcement.

Anthropic, built by ex-OpenAI staff and others including former Register vulture Jack Clark, also contends that Claude 3.5 Sonnet, which we'll just call Sonnet 3.5 from here on out, has a better grasp of humor and is therefore easier to work with. That and other improvements, Anthropic claims, mean the model is more reliable when asked to implement complex instructions.

The San Francisco upstart also published a video demoing its tech alongside the announcement.

The release also introduced a Claude.ai chatbot feature called “Artifacts,” which sends content produced by the model to a dedicated window that Anthropic described as “a dynamic workspace where they [users] can see, edit, and build upon Claude’s creations in real-time, seamlessly integrating AI-generated content into their projects and workflows.”

"In the near future, teams - and eventually entire organizations - will be able to securely centralize their knowledge, documents, and ongoing work in one shared space, with Claude serving as an on-demand teammate," Team Anthropic boasted.

This feature is no doubt helped by the fact that Sonnet 3.5 maintains its predecessor's 200,000 token context window, which you can think of as the model's short-term memory.
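To make that short-term memory idea concrete, here's a minimal Python sketch of the history-trimming a chat client might do to stay inside the window. The four-characters-per-token ratio is a crude assumption for illustration - it is not Anthropic's tokenizer - and the message format is hypothetical.

```python
# Rough sketch: keep a chat history within a fixed context window.
# Assumes ~4 characters per token as a crude estimate; a real client
# would use the provider's own token-counting tools instead.

MAX_CONTEXT_TOKENS = 200_000  # Claude 3.5 Sonnet's advertised window

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # heuristic only, not a real tokenizer

def trim_history(messages: list[dict]) -> list[dict]:
    """Drop the oldest messages until the estimated total fits the window."""
    total = sum(estimate_tokens(m["content"]) for m in messages)
    while messages and total > MAX_CONTEXT_TOKENS:
        dropped = messages.pop(0)  # the oldest turn falls out of "memory"
        total -= estimate_tokens(dropped["content"])
    return messages
```

The upshot: anything trimmed is simply gone as far as the model is concerned, which is why a bigger window means fewer mid-conversation amnesia moments.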

Sonnet 3.5's vision processing has also improved: the model is better at picking out text from complex images and at interpreting graphs and charts. If Anthropic is to be believed, Sonnet 3.5 comes out on top against GPT-4o and Gemini 1.5 on all vision workloads except visual question-answering.
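For the curious, sending a chart or screenshot to the model looks roughly like this sketch using Anthropic's Python SDK and its Messages API. The file name is made up, and the model identifier is the launch-day one, so treat both as assumptions to check against current docs.

```python
import base64

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

# chart.png is a placeholder - any PNG or JPEG you want analyzed
with open("chart.png", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # launch-day identifier; verify before use
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/png",
                        "data": image_b64}},
            {"type": "text", "text": "Summarize what this chart shows."},
        ],
    }],
)
print(response.content[0].text)
```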

Safety and privacy remain central tenets for the startup, which has assigned its latest model an AI Safety Level of 2 (ASL-2); Anthropic associates higher levels with more dangerous capabilities. The rating covers models that "show early signs of dangerous capabilities" - such as the ability to teach someone how to create biological weapons - but that fall short of providing information a search engine couldn't.

To maintain the safety and privacy of its models, Anthropic also incorporated feedback from the UK's Artificial Intelligence Safety Institute and from Thorn - an org that specializes in protecting children online - to fine-tune the model.

Sonnet 3.5 is available in Anthropic's web and mobile apps, while developers can integrate the model into their projects using APIs, Amazon Bedrock, or Google Vertex AI. API access will set you back $3 for every million input tokens and $15 for every million output tokens generated.
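Those rates make back-of-the-envelope cost math straightforward. Here's a minimal sketch using only the prices quoted above; the token counts in the example are invented.

```python
# Back-of-the-envelope cost check using the article's quoted rates:
# $3 per million input tokens, $15 per million output tokens.
INPUT_RATE = 3.00 / 1_000_000
OUTPUT_RATE = 15.00 / 1_000_000

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 2,000-token prompt that draws a 500-token reply
print(f"${request_cost(2_000, 500):.4f}")  # $0.0135
```

In practice the API reports actual token counts with each response, so real billing math needn't rely on guesswork.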

Anthropic plans to add more models to the Claude 3.5 family, with Haiku and Opus variants slated for later this year. The model builder has already begun work on its next generation of AI models, which will integrate new features such as memory to further expand their capabilities.

As always with these LLMs, they do hallucinate and will get things wrong. They also have their uses. YMMV. ®

PS: Away from the marketing and into the science, you may be interested in research Anthropic put out last month that described in interesting detail the way in which its models work internally. The paper goes into the math with examples.

 

https://www.theregister.com//2024/06/20/anthropic_claude_35/
