Model Explanation
Understanding Models
MeshAPI acts as a router, forwarding your standardized API calls to a wide range of underlying foundation models. Each Model represents a distinct neural network trained by an AI organization (such as OpenAI, Google, Anthropic, or Meta).
Base Models
Base models are the standard foundational engines you chat with. They take a series of messages and output a sequence of text.
Naming Convention
Model names typically follow a prefix structure: <provider>/<model_name>.
For example:
- openai/gpt-4o-mini
- anthropic/claude-3-haiku
- meta-llama/llama-3.1-8b-instruct
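A minimal sketch of splitting a model identifier into its provider prefix and model name, following the <provider>/<model_name> convention above. The helper itself is illustrative, not part of the MeshAPI client:

```python
def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split '<provider>/<model_name>' at the first slash."""
    provider, _, model_name = model_id.partition("/")
    if not model_name:
        raise ValueError(f"expected '<provider>/<model_name>', got {model_id!r}")
    return provider, model_name

# The provider prefix ends at the first slash, so identifiers like
# meta-llama/llama-3.1-8b-instruct split correctly.
print(parse_model_id("openai/gpt-4o-mini"))
```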
Capabilities & Context Limits
Each model has a different maximum “context limit” – the number of tokens (sub-word units of text, roughly a few characters each) it can process in a single request.
- Fast, small models (e.g. llama-3) may have smaller limits but respond almost instantly.
- Large models (e.g. gpt-4o) can handle massive documents and are heavily optimized for reasoning, but take slightly longer to generate tokens.
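One practical consequence of context limits is checking whether a prompt will fit before sending it. The sketch below uses a crude characters-per-token heuristic; the limit figures are placeholder assumptions, not official MeshAPI numbers — consult the live Model Catalog for real values:

```python
# Illustrative context limits (assumed values, NOT official figures).
CONTEXT_LIMITS = {
    "meta-llama/llama-3.1-8b-instruct": 8_192,
    "openai/gpt-4o": 128_000,
}

def fits_in_context(model_id: str, prompt: str) -> bool:
    """Rough pre-flight check: ~4 characters per token heuristic."""
    estimated_tokens = len(prompt) // 4
    return estimated_tokens <= CONTEXT_LIMITS[model_id]
```

A real implementation would use the model's actual tokenizer rather than a character-count heuristic, since tokenization varies between model families.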
Choosing the Right Model
When deciding what model to use for your application, consider these factors:
- Cost: Do you need a frontier model’s intelligence, or would a cheaper model handle the task (e.g. rapid categorization)?
- Speed (Latency): Lighter models offer much lower time-to-first-token.
- Context Length: If you are passing an entire codebase or large PDF, ensure the model supports large contexts.
You can view the full dynamic list of supported models in our live Model Catalog.