Interpreting results

Leverage the evaluation metrics to improve your model.

The AI Studio platform offers a range of evaluation metrics that shed light on the performance, accuracy, and error rates of your fashion AI model.

Accuracy Score

The accuracy score indicates how accurately the model identifies and tags images with the right labels in Image Tagging projects.

AI Studio presents users with not just the overall accuracy percentage of the model but also accuracy scores by label. Using these insights, users can fine-tune the model label by label: if one label has a lower accuracy rate, the user can re-train the model after iterating on the data for that label.

(Insert Image)
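AI Studio computes overall and per-label accuracy for you, but the idea behind the breakdown can be illustrated with a short sketch. The tag names and predictions below are hypothetical, not taken from the platform:

```python
from collections import defaultdict

def per_label_accuracy(y_true, y_pred):
    """Overall accuracy plus accuracy per true label, from parallel
    lists of true and predicted tags."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    overall = sum(correct.values()) / len(y_true)
    by_label = {label: correct[label] / total[label] for label in total}
    return overall, by_label

# Hypothetical tagging results
y_true = ["dress", "dress", "shirt", "shirt", "skirt"]
y_pred = ["dress", "shirt", "shirt", "shirt", "skirt"]
overall, by_label = per_label_accuracy(y_true, y_pred)
# overall is 0.8, but "dress" scores only 0.5 -- a signal to
# iterate on the training data for that label and re-train
```

A label whose score lags the overall accuracy is exactly the case the per-label view is designed to surface.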

Success Images

Success Images shows the number of fashion images in the dataset that the trained model identified and tagged accurately. By clicking View Results, users can see all the images that were tagged successfully.

(Insert Image)

Failed Images

Failed Images shows the number of images that the trained fashion AI model did not identify and tag accurately. By clicking View Failed Images, users can see all such images and explore why they failed: hover over an image and click Explore Why. Using these insights, you can iterate on the dataset and re-train the model.

(Insert Image)

Confusion Matrix

The Confusion Matrix is a tabular summary of a classification model's performance on a test dataset for which the true labels are known. In AI Studio Image Tagging projects, the Confusion Matrix tells the user how often the model classified and tagged images correctly and, when it confused labels, which label it predicted instead. From this table, users gain valuable insights into precision, recall, false positives, false negatives, biases, and so on.

(Insert Image)

The percentages highlighted in green were tagged correctly, while those in red show where the model confused one label for another.
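The table itself is simple to construct once true and predicted labels are known. A minimal sketch, with hypothetical tags (AI Studio builds and color-codes this for you):

```python
def confusion_matrix(y_true, y_pred, labels):
    """Rows index the true label, columns the predicted label.
    Diagonal cells are correct tags; off-diagonal cells are confusions."""
    index = {label: i for i, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        matrix[index[t]][index[p]] += 1
    return matrix

labels = ["dress", "shirt", "skirt"]
y_true = ["dress", "dress", "shirt", "shirt", "skirt"]
y_pred = ["dress", "shirt", "shirt", "shirt", "skirt"]
m = confusion_matrix(y_true, y_pred, labels)
# m[0] == [1, 1, 0]: one "dress" tagged correctly,
# one "dress" confused as "shirt"
```

Reading along a row shows where a given true label's images ended up; a large off-diagonal count pinpoints which pair of labels the model struggles to separate.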

Violin Plot

Violin Plots enable users to understand the distribution of numeric data for each label using density curves, accompanied by box plots. The thickest part of a Violin Plot shows where the data is most densely distributed.

(Insert Image)
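For readers who want to reproduce this kind of view outside the platform, a violin plot can be drawn with matplotlib. The per-label confidence scores below are randomly generated stand-ins, not AI Studio output:

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-label model-confidence scores
scores = {
    "dress": rng.normal(0.9, 0.05, 200),
    "shirt": rng.normal(0.7, 0.15, 200),
}

fig, ax = plt.subplots()
parts = ax.violinplot(list(scores.values()), showmedians=True)
ax.set_xticks([1, 2])
ax.set_xticklabels(list(scores.keys()))
ax.set_ylabel("model confidence")
fig.savefig("violin.png")
```

The wide bulge of the "dress" violin near 0.9 versus the broader, lower "shirt" violin is the kind of contrast the plot makes visible at a glance.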

TSNE Plot

TSNE (t-distributed Stochastic Neighbor Embedding) Plots simplify large datasets graphically using dimensionality reduction. Each label is assigned a color and the data points are plotted to form clusters, letting users see outliers, accuracy levels, and error rates at a glance.

(Insert Image)
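The underlying technique is available in scikit-learn as a rough point of comparison. The embeddings here are synthetic stand-ins for image features; AI Studio's actual feature extraction and plot settings are not documented here:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Hypothetical image embeddings: 100 samples x 64 dims,
# drawn from two separated clusters (two "labels")
embeddings = np.vstack([
    rng.normal(0.0, 1.0, (50, 64)),
    rng.normal(5.0, 1.0, (50, 64)),
])
labels = np.array([0] * 50 + [1] * 50)

# Reduce to 2 dimensions for plotting
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)
# coords has shape (100, 2); scattering coords colored by `labels`
# shows two clusters, with mixed-in points standing out as outliers
```

Points that land inside the wrong cluster, or far from any cluster, are the visual counterpart of the failed images discussed above.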

PR (Precision-Recall) Score

The PR Score indicates the performance of each label in terms of its precision (Positive Predictive Value) and recall (also called hit rate, sensitivity, or True Positive Rate). Higher scores indicate that the model is returning more accurate results.

The Precision-Recall Curve is a graphical representation of the trade-off between the precision and recall values for each label of the AI model.

(Insert Image)
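The definitions behind the PR Score reduce to counting true positives, false positives, and false negatives per label. A minimal one-vs-rest sketch with hypothetical tags:

```python
def precision_recall(y_true, y_pred, label):
    """One-vs-rest precision and recall for a single label.
    precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == label and t == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == label and t != label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != label and t == label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = ["dress", "dress", "shirt", "shirt", "skirt"]
y_pred = ["dress", "shirt", "shirt", "shirt", "skirt"]
p, r = precision_recall(y_true, y_pred, "dress")
# p == 1.0 (every "dress" prediction was right),
# r == 0.5 (only half of the true dresses were found)
```

The full PR Curve traces these two quantities as the model's confidence threshold varies, which is why a label can trade precision for recall along the curve.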


Last updated 3 years ago