
# Model Explanation

> Understand the different types of AI models available through the API.

# Understanding Models

MeshAPI acts as a router, forwarding your standardized API calls to an expansive list of underlying foundational models. A **Model** represents a distinct neural network trained by various AI organizations (like OpenAI, Google, Anthropic, Meta, etc.).

## Base Models

Base models are the standard foundational engines you chat with. They take a series of messages and output a sequence of text.

### Naming Convention

Model names follow a `<provider>/<model_name>` pattern, where the prefix identifies the organization that trained the model.
For example:

* `openai/gpt-4o-mini`
* `anthropic/claude-3-haiku`
* `meta-llama/llama-3.1-8b-instruct`
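
Because the convention is mechanical, an identifier can be split into its parts in code. A minimal sketch in Python (the helper name is ours, not part of any SDK):

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a '<provider>/<model_name>' identifier into its two parts."""
    provider, _, model_name = model_id.partition("/")
    if not provider or not model_name:
        raise ValueError(f"expected '<provider>/<model_name>', got {model_id!r}")
    return provider, model_name

print(split_model_id("meta-llama/llama-3.1-8b-instruct"))
# ('meta-llama', 'llama-3.1-8b-instruct')
```

Note that only the first `/` separates the provider, so model names themselves may contain hyphens and dots freely.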

## Capabilities & Context Limits

Each model has a different maximum "context limit" – the number of tokens (chunks of text, typically a few characters each) it can process in a single request. Most providers count both the input prompt and the generated output against this limit.

* Fast, small models (e.g. `llama-3`) may have smaller limits but respond with very low latency.
* Large models (e.g. `gpt-4o`) can handle massive documents and are generally stronger at reasoning, but they generate tokens more slowly.
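
Since the limit usually covers input plus output, it is worth checking a request's token budget before sending it. A minimal sketch (the limit shown is illustrative, not a real model's value):

```python
def fits_context(prompt_tokens: int, max_output_tokens: int, context_limit: int) -> bool:
    """Check whether the prompt plus the requested output fits in a model's
    context window. Most providers count both against the limit."""
    return prompt_tokens + max_output_tokens <= context_limit

# Illustrative limit only -- check the live Model Catalog for real values.
print(fits_context(prompt_tokens=6_000, max_output_tokens=1_000, context_limit=8_192))    # True
print(fits_context(prompt_tokens=120_000, max_output_tokens=4_000, context_limit=8_192))  # False
```

In practice you would estimate `prompt_tokens` with a tokenizer rather than a character count, since tokenization varies between model families.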

## Choosing the Right Model

When deciding what model to use for your application, consider these factors:

* **Cost:** Larger models charge more per token. Do you need frontier intelligence, or just rapid categorization?
* **Speed (Latency):** Lighter models offer much lower time-to-first-token.
* **Context Length:** If you are passing an entire codebase or large PDF, ensure the model supports large contexts.
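
These trade-offs can be encoded as a simple selection rule. A sketch over a hypothetical catalog (the model entries, context sizes, and prices below are illustrative only, not live data):

```python
# Hypothetical catalog entries -- values are illustrative, not real pricing.
CATALOG = [
    {"id": "meta-llama/llama-3.1-8b-instruct", "context": 128_000, "usd_per_1m_tokens": 0.05},
    {"id": "anthropic/claude-3-haiku",         "context": 200_000, "usd_per_1m_tokens": 0.25},
    {"id": "openai/gpt-4o",                    "context": 128_000, "usd_per_1m_tokens": 2.50},
]

def cheapest_with_context(catalog: list[dict], min_context: int) -> str:
    """Return the cheapest model whose context window meets the requirement."""
    eligible = [m for m in catalog if m["context"] >= min_context]
    if not eligible:
        raise ValueError(f"no model offers a {min_context}-token context")
    return min(eligible, key=lambda m: m["usd_per_1m_tokens"])["id"]

print(cheapest_with_context(CATALOG, 150_000))  # anthropic/claude-3-haiku
```

A real application would fetch the catalog dynamically and might also weight latency or quality scores, but the filter-then-minimize shape stays the same.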

You can view the full dynamic list of supported models in our live Model Catalog.