Overview

Our foundation model family is generalized across multiple domains and can be used for various downstream tasks. Performance can be optimized by:

  • Prompt engineering: refine prompts through manual experimentation or automated search
  • Fine-tuning: leverage your own data and our expertise to optimize performance
  • Context improvement: provide more relevant, high-quality data

Optimizing

Prompt Engineering

Prompt engineering involves optimizing the template and context to maximize model performance. This can be done through manual experimentation where you test different prompt variations, or through automated search methods that systematically explore prompt combinations. Key aspects include refining the instruction clarity, context structure, and variable placement to achieve better recommendation accuracy.
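As a concrete illustration of automated search, the sketch below exhaustively scores combinations of instruction wording and context formatting against a small evaluation set. The template strings, the `score_fn` callback, and the example structure are all hypothetical stand-ins, not part of any actual API:

```python
from itertools import product

# Hypothetical prompt components to search over (illustrative only).
INSTRUCTIONS = [
    "Recommend items for the user below.",
    "Given the user's history, list the most relevant items.",
]
CONTEXT_FORMATS = [
    "History: {history}",
    "The user previously interacted with: {history}",
]

def build_prompt(instruction: str, context_format: str, history: str) -> str:
    """Assemble a full prompt from an instruction and a context template."""
    return instruction + "\n" + context_format.format(history=history)

def evaluate(pair, eval_set, score_fn):
    """Average a task-specific score for one (instruction, format) pair."""
    instruction, context_format = pair
    scores = [
        score_fn(build_prompt(instruction, context_format, ex["history"]), ex)
        for ex in eval_set
    ]
    return sum(scores) / len(scores)

def grid_search(eval_set, score_fn):
    """Try every instruction/context combination; return the best pair."""
    candidates = list(product(INSTRUCTIONS, CONTEXT_FORMATS))
    return max(candidates, key=lambda pair: evaluate(pair, eval_set, score_fn))
```

In practice `score_fn` would call the model and measure recommendation accuracy on held-out examples; the grid can be replaced by random or evolutionary search when the space of variants grows.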

Fine-tuning

Fine-tuning leverages your own data and our expertise to optimize performance for your specific domain and use case. This process involves training the pre-trained models on your proprietary data to improve accuracy and relevance for your specific business needs.
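The core idea can be shown with a toy model: start from weights learned on generic data, then continue gradient-descent training on domain-specific examples. This is a minimal stand-in for real fine-tuning (which would use the actual model and training stack), with all numbers and names invented for illustration:

```python
# Toy fine-tuning: a one-feature linear model y = w * x, trained further
# on "proprietary" (x, y) pairs via gradient descent on squared error.

def fine_tune(weight, data, lr=0.1, epochs=50):
    """Continue training a pre-trained weight on new (x, y) pairs."""
    w = weight
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2.0 * (pred - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

# Hypothetical "pre-trained" weight, then domain data where y is about 3x.
pretrained_w = 1.0
domain_data = [(1.0, 3.0), (2.0, 6.0), (0.5, 1.5)]
tuned_w = fine_tune(pretrained_w, domain_data)
```

The pre-trained weight encodes generic behavior; the loop adapts it to the new data distribution, which is exactly what fine-tuning does at scale across millions of parameters.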

Improve Context

Providing more relevant and high-quality data can significantly enhance model performance. This involves curating comprehensive and diverse datasets that capture the nuances of your domain, ensuring the model has sufficient examples to learn from and generalize effectively.
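A first step in curating such a dataset is removing entries that add noise rather than signal. The sketch below, with an assumed `min_length` threshold chosen arbitrarily, drops near-empty documents and exact duplicates while preserving order:

```python
def curate_context(documents, min_length=20):
    """Filter out very short entries and normalized exact duplicates."""
    seen = set()
    curated = []
    for doc in documents:
        text = doc.strip()
        if len(text) < min_length:
            continue  # too short to carry useful signal
        key = " ".join(text.lower().split())  # normalize case and whitespace
        if key in seen:
            continue  # duplicate after normalization
        seen.add(key)
        curated.append(text)
    return curated
```

Real pipelines typically go further, with near-duplicate detection, domain-relevance scoring, and diversity sampling, but even simple filters like this can measurably improve the quality of the context the model sees.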

Model Family

  • PRAG family:
    • 'prag_v1': released on 2026-01-24