Tutorial — Anthropic API

Prompt engineering with Anthropic API

Teemu Maatta
8 min read · Apr 5


The Anthropic API offers early access to cutting-edge Large Language Models (LLMs) from a highly cited AI research team.

Photo by vahid kanani on Unsplash


Anthropic researchers have shaped recent progress in AI research, including scaling laws¹, GPT-3², CLIP³, RLHF⁴, and RLAIF⁵. Their recent paper on AI safety⁶ is a must-read. I also recently covered Pretraining with Human Feedback⁶.

It seems obvious that Anthropic is well positioned to continue developing State-of-the-Art (SOTA) models in the future. However, this article is not about their research, but about their early-access API.

I will help you get started with the Anthropic API: the models available, setting up the API, and a few prompt engineering examples.

Let’s get started!

Claude Console

The simplest way to experience the Claude chatbot is through the Slack and Poe apps. Once you obtain access, you can use Claude directly from the Claude console.

Claude console. Image by Author.

Anthropic API

The Anthropic API offers developers the possibility to integrate LLMs into their workflows. Evaluation access is currently free, and anybody can request early access. Do not use confidential or sensitive information during evaluation. Commercial usage is agreed separately with Anthropic sales.

Anthropic commercial deployment. Image by Author.

The Terms of Service indicate that both the model inputs and outputs belong to you. I recommend checking the Privacy Policy and the Responsible Disclosure Policy as well.

The Anthropic API currently supports a context window of approximately 8k tokens. Input (prompt) tokens are priced lower than output tokens.
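As a rough illustration of staying within the ~8k-token window, the sketch below estimates token counts with a simple characters-per-token heuristic (about four characters per token is a common rule of thumb; the API's actual tokenizer will differ) and trims the oldest turns of a conversation until the input plus a reserved output budget fits. The budget values here are illustrative assumptions, not official limits:

```python
# Rough sketch: trim a conversation to fit an ~8k-token context window.
# The 4-characters-per-token heuristic is an approximation, not the
# API's real tokenizer, so leave generous headroom in practice.

CONTEXT_WINDOW = 8000   # approximate total token budget (assumption)
OUTPUT_BUDGET = 1000    # tokens reserved for the model's reply (assumption)

def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def trim_history(turns: list[str]) -> list[str]:
    """Drop the oldest turns until the prompt fits the input budget."""
    input_budget = CONTEXT_WINDOW - OUTPUT_BUDGET
    trimmed = list(turns)
    while trimmed and sum(estimate_tokens(t) for t in trimmed) > input_budget:
        trimmed.pop(0)  # discard the oldest turn first
    return trimmed
```

Since output tokens cost more than input tokens, keeping `OUTPUT_BUDGET` conservative also caps the most expensive part of each call.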

Let’s build our first prompt!

Getting started with the Anthropic API
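Claude's completion endpoint expects prompts as alternating `\n\nHuman:` / `\n\nAssistant:` turns. Below is a minimal sketch of building such a prompt; the commented-out API call shows roughly what a request looks like with the `anthropic` Python SDK, but the model name `claude-v1` and the parameter values are illustrative, so check the current API reference before relying on them:

```python
# Minimal sketch of the prompt format the completion endpoint expects:
# alternating "\n\nHuman:" / "\n\nAssistant:" turns. These constants
# mirror anthropic.HUMAN_PROMPT / anthropic.AI_PROMPT from the SDK.

HUMAN_PROMPT = "\n\nHuman:"
AI_PROMPT = "\n\nAssistant:"

def build_prompt(question: str) -> str:
    """Wrap a single user question in the Human/Assistant turn format."""
    return f"{HUMAN_PROMPT} {question}{AI_PROMPT}"

prompt = build_prompt("What is prompt engineering?")

# With the SDK installed (`pip install anthropic`) and an API key,
# the request looks roughly like this -- names and values are
# assumptions based on the early-access SDK, not guaranteed:
#
# import anthropic
# client = anthropic.Client(api_key="YOUR_API_KEY")
# response = client.completion(
#     prompt=prompt,
#     model="claude-v1",
#     max_tokens_to_sample=300,
#     stop_sequences=[anthropic.HUMAN_PROMPT],
# )
# print(response["completion"])
```

The `stop_sequences` entry tells the model to stop before it starts generating a new `Human:` turn itself.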


