Improving Your Model

Here are some tips and best practices to improve your model.

The evaluation metrics form the core of the iterative approach adopted by the AI Studio platform. By leveraging the insights and suggestions provided by the platform, users can strengthen the accuracy and robustness of their fashion AI models.
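
As a rough illustration of the kind of per-label metrics that drive this iteration, the snippet below computes precision, recall, and F1 for each tag from a list of ground-truth labels and model predictions. The labels and values are hypothetical placeholders; AI Studio computes and displays these metrics for you on its evaluation pages.

```python
# Minimal sketch: per-label precision, recall and F1 from ground truth vs. predictions.
# The labels below are hypothetical placeholders; AI Studio computes and displays
# these metrics for you on the evaluation pages.
ground_truth = ["half sleeve", "full sleeve", "sleeveless", "half sleeve", "full sleeve"]
predictions  = ["half sleeve", "half sleeve", "sleeveless", "half sleeve", "full sleeve"]

for label in sorted(set(ground_truth) | set(predictions)):
    tp = sum(g == label and p == label for g, p in zip(ground_truth, predictions))
    fp = sum(g != label and p == label for g, p in zip(ground_truth, predictions))
    fn = sum(g == label and p != label for g, p in zip(ground_truth, predictions))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    print(f"{label:12s}  precision={precision:.2f}  recall={recall:.2f}  f1={f1:.2f}")
```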

Suggestions to improve your model

Users receive suggestions for improving their models on the Analysis page. These may include removing biases by increasing the diversity of the data, re-training models with enhanced datasets, improving the quality of inputs, and so on.
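
Outside the platform, one quick way to act on a data-diversity suggestion is to audit how many images each label has before re-uploading an enhanced dataset. The folder layout, path, and threshold below are assumptions for illustration; the Analysis page surfaces this kind of insight for you.

```python
# Minimal sketch: count images per label to spot under-represented classes
# before re-training with an enhanced dataset. Assumes a hypothetical local
# layout of my_dataset/<label>/<image files>; adjust the path and extensions.
from pathlib import Path

dataset_dir = Path("my_dataset")              # hypothetical local copy of the data
extensions = {".jpg", ".jpeg", ".png"}

counts = {
    label_dir.name: sum(1 for f in label_dir.iterdir() if f.suffix.lower() in extensions)
    for label_dir in dataset_dir.iterdir()
    if label_dir.is_dir()
}

for label, n in sorted(counts.items(), key=lambda kv: kv[1]):
    flag = "  <-- consider adding more images" if n < 300 else ""
    print(f"{label:20s} {n:5d}{flag}")
```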

Mistakes to avoid

  • Ensure that the data is labelled correctly.

  • Ensure that the dataset is holistic and representative of the real-world data the model will see after deployment.

  • Ensure that edge-case data is well represented.

  • Ensure that the dataset captures the full data distribution (see the sketch after this list).

  • When creating the project, ensure that the correct category is selected, if applicable.
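
As a rough check on the distribution point above, you can compare the label proportions in your training data against a sample of the data the model will see after deployment. All counts below are hypothetical placeholders.

```python
# Minimal sketch: compare the label mix of the training set against a sample of
# post-deployment data to spot gaps. All counts here are hypothetical.
train_counts = {"half sleeve": 450, "full sleeve": 400, "sleeveless": 40}
deployed_sample_counts = {"half sleeve": 60, "full sleeve": 55, "sleeveless": 45}

def proportions(counts):
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

train_p = proportions(train_counts)
live_p = proportions(deployed_sample_counts)

for label in sorted(set(train_p) | set(live_p)):
    t, p = train_p.get(label, 0.0), live_p.get(label, 0.0)
    flag = "  <-- under-represented in training" if t < 0.5 * p else ""
    print(f"{label:12s}  train={t:.1%}  deployed={p:.1%}{flag}")
```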

Useful Tips

  • Always create default evaluation datasets that represent the final real-world data.

  • When creating a project, it is good practice to build a large, accurately labelled training dataset with the final real-world data in mind (more than 300 images per label).

  • Create distinct labels, and merge labels whose differences are minimal. For example, elbow sleeve, half sleeve, and 3/4 sleeve can be combined into a single half sleeve label (see the sketch after this list).

  • Create 2-3 default evaluation datasets before training any model, so that every new model is automatically evaluated against them (around 100 images per label).

  • Use the insights to tweak the training dataset and create new models.

  • Use the Model Summary page and its graphs to track the progress of your models.
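
If you decide to merge labels whose differences are minimal, as in the sleeve example above, a small mapping applied to your labels before upload is usually enough. The label names and mapping below are hypothetical examples.

```python
# Minimal sketch: fold near-duplicate labels into one before building the
# training dataset. The mapping and label list are hypothetical examples.
merge_map = {
    "elbow sleeve": "half sleeve",
    "3/4 sleeve": "half sleeve",
}

raw_labels = ["elbow sleeve", "half sleeve", "3/4 sleeve", "full sleeve"]
merged_labels = [merge_map.get(label, label) for label in raw_labels]

print(merged_labels)  # ['half sleeve', 'half sleeve', 'half sleeve', 'full sleeve']
```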
