Together AI

Fast inference for open source AI models

Rating: 4.5

Fast inference for open source LLMs with OpenAI-compatible API. Run Llama, Mistral, and more models at competitive prices. Fine-tuning support included.

Features

OpenAI-compatible API
100+ open source models
Fastest Llama inference
Fine-tuning service
Function calling
JSON mode
Streaming responses
Embeddings API
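Because the API is OpenAI-compatible, a request to Together looks like a standard OpenAI chat completion call, just sent to Together's endpoint. A minimal sketch of building such a request payload (the base URL follows Together's public docs; the model id is illustrative and availability varies):

```python
import json

# Together's OpenAI-compatible endpoint (check current docs for changes).
TOGETHER_BASE_URL = "https://api.together.xyz/v1"

def build_chat_request(model, messages, stream=False):
    """Build an OpenAI-style chat completion payload.

    The resulting dict would be POSTed as JSON to
    f"{TOGETHER_BASE_URL}/chat/completions" with an Authorization
    header carrying the API key.
    """
    return {
        "model": model,
        "messages": messages,
        "stream": stream,
    }

payload = build_chat_request(
    "meta-llama/Llama-3-8b-chat-hf",  # example model id, not guaranteed
    [{"role": "user", "content": "Hello"}],
)
print(json.dumps(payload, indent=2))
```

Because the payload shape matches OpenAI's, existing OpenAI SDK clients can typically be reused by only swapping the base URL and API key.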

Pros

  • OpenAI-compatible drop-in
  • Fast open source model inference
  • Competitive pricing
  • Wide model selection
  • Fine-tuning support

Cons

  • Fewer enterprise features
  • Limited geographic regions
  • Newer platform
  • Some model availability limits