Advanced · Tutorial 15

Model Evaluation: Metrics and Testing

NeuronDB Team
2/24/2025
26 min read

Model Evaluation Overview

Model evaluation measures how well a model performs. It uses metrics appropriate to the task, tests on unseen data, compares candidate models, and guides improvements.

Sound evaluation requires held-out test data, metrics that match the task, and attention to business goals, so that results translate into actionable insights.

Figure: Model Evaluation

The diagram shows the evaluation process: a model makes predictions, the predictions are compared to actual values, and metrics summarize the resulting performance.
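As a minimal sketch of holding out unseen test data, assuming scikit-learn and a synthetic dataset purely for illustration:

# Hold-out test split
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
# Synthetic data for illustration; in practice use your own features and labels
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(f"Train size: {len(X_train)}, Test size: {len(X_test)}")

The model is fit only on the training split, and all evaluation metrics are computed on the held-out test split.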

Classification Metrics

Classification metrics quantify how well a classifier performs. Accuracy measures overall correctness, precision measures the quality of positive predictions, recall measures coverage of actual positives, and F1 balances precision and recall.

Accuracy is (TP + TN) / (TP + TN + FP + FN); it works well for balanced classes but can be misleading for imbalanced ones. Precision is TP / (TP + FP) and measures how many predicted positives are correct. Recall is TP / (TP + FN) and measures how many actual positives are found.

# Classification Metrics
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix
# Ground-truth labels and model predictions for a small binary example
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1, 1]
accuracy = accuracy_score(y_true, y_pred)
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
cm = confusion_matrix(y_true, y_pred)
print(f"Accuracy: {accuracy}")
print(f"Precision: {precision}")
print(f"Recall: {recall}")
print(f"F1: {f1}")
print("Confusion Matrix:")
print(cm)

Choose metrics based on your goals: accuracy for balanced classes, precision when false positives are costly, recall when false negatives are costly, and F1 when you need a balance of the two.

Figure: Confusion Matrix

The diagram shows the structure of a confusion matrix: true positives and true negatives are correct predictions, while false positives and false negatives are errors. The matrix enables detailed error analysis, as in the sketch below.
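As a minimal sketch of reading the matrix, assuming a binary problem and reusing the y_true and y_pred lists from the example above, the four cells can be unpacked directly:

# Unpacking the confusion matrix (binary case)
from sklearn.metrics import confusion_matrix
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1, 1]
# For binary labels, ravel() returns the cells in the order TN, FP, FN, TP
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TN={tn}, FP={fp}, FN={fn}, TP={tp}")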

Figure: Classification Metrics

The diagram shows how the metrics relate to one another. Each captures a different aspect of performance, and tradeoffs exist between them: raising the decision threshold typically increases precision at the cost of recall, as sketched below.
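A minimal sketch of that tradeoff, assuming the classifier exposes predicted probabilities; the y_scores values here are hypothetical and used only for illustration:

# Precision-recall tradeoff across decision thresholds
from sklearn.metrics import precision_recall_curve
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_scores = [0.2, 0.8, 0.6, 0.3, 0.4, 0.1, 0.9, 0.7]  # hypothetical predicted probabilities
precisions, recalls, thresholds = precision_recall_curve(y_true, y_scores)
for p, r, t in zip(precisions, recalls, thresholds):
    print(f"threshold={t:.2f} precision={p:.2f} recall={r:.2f}")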

Regression Metrics

Regression metrics measure prediction error. MSE emphasizes large errors, MAE treats all errors equally, R² measures the proportion of variance explained, and RMSE expresses error in the original units of the target.

MSE is (1/n) Σ(y_pred - y_true)² and penalizes large errors quadratically. MAE is (1/n) Σ|y_pred - y_true| and treats all errors equally. R² is 1 - (SS_res / SS_tot) and measures how much of the variance in the target the model explains.

# Regression Metrics
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
import numpy as np
# Actual target values and model predictions
y_true = [100, 200, 300, 400, 500]
y_pred = [110, 190, 310, 390, 510]
mse = mean_squared_error(y_true, y_pred)
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)
rmse = np.sqrt(mse)
print(f"MSE: {mse}")
print(f"MAE: {mae}")
print(f"R²: {r2}")
print(f"RMSE: {rmse}")

Choose metrics based on your needs: MSE when large errors matter most, MAE for robustness to outliers, R² for overall fit quality, and RMSE for interpretability in the target's units.

Figure: Regression Metrics

The diagram shows the regression metrics: predictions are compared to actual values, errors are measured by different loss functions, and each metric emphasizes a different aspect of the error distribution.

Embedding Quality Metrics

Embedding quality metrics measure how effective an embedding model is. They test whether semantic relationships are preserved, evaluate downstream task performance, and guide embedding selection.

Common metrics include similarity correlation, analogy accuracy, and downstream task performance. Similarity correlation measures agreement with human similarity judgments, analogy accuracy tests whether relational structure is captured, and downstream performance measures effectiveness on the tasks the embeddings will actually serve.

# Embedding Quality Metrics
def evaluate_embeddings(embeddings, test_pairs, human_similarities, analogies):
    # compute_similarities, compute_correlation, solve_analogies, and
    # evaluate_downstream are placeholders for project-specific implementations.
    # Similarity correlation: agreement with human similarity judgments
    similarities = compute_similarities(embeddings, test_pairs)
    correlation = compute_correlation(similarities, human_similarities)
    # Analogy accuracy: fraction of analogies solved correctly
    analogy_acc = solve_analogies(embeddings, analogies)
    # Downstream performance: score on a representative downstream task
    downstream_score = evaluate_downstream(embeddings)
    return {
        'similarity_correlation': correlation,
        'analogy_accuracy': analogy_acc,
        'downstream_score': downstream_score
    }

Embedding metrics guide model selection: they quantify semantic quality and help predict downstream performance. A concrete version of the similarity-correlation step is sketched below.
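As a minimal, self-contained sketch of the similarity-correlation step, assuming toy embeddings and made-up human ratings purely for illustration: cosine similarity is computed for each word pair and compared to the human scores with Spearman correlation.

# Similarity correlation with Spearman's rank correlation
import numpy as np
from scipy import stats
def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
# Hypothetical embeddings and human similarity ratings (0-10 scale)
embeddings = {
    'cat': np.array([0.9, 0.1, 0.0]),
    'dog': np.array([0.8, 0.2, 0.1]),
    'car': np.array([0.1, 0.9, 0.3]),
}
test_pairs = [('cat', 'dog'), ('cat', 'car'), ('dog', 'car')]
human_similarities = [9.0, 2.0, 2.5]
model_similarities = [cosine_similarity(embeddings[a], embeddings[b]) for a, b in test_pairs]
correlation, _ = stats.spearmanr(model_similarities, human_similarities)
print(f"Similarity correlation: {correlation:.3f}")

A high rank correlation means the embedding orders word pairs by similarity in roughly the same way people do.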

A/B Testing Frameworks

A/B testing compares model variants in production-like conditions. It measures performance differences, determines whether they are statistically significant, and guides deployment decisions.

An A/B test splits traffic between variants, collects performance metrics for each, tests whether the observed difference is significant, and selects the better variant.

# A/B Testing
import numpy as np
from scipy import stats
def ab_test(variant_a_scores, variant_b_scores, alpha=0.05):
    # Two-sample t-test on the per-sample scores of each variant
    t_stat, p_value = stats.ttest_ind(variant_a_scores, variant_b_scores)
    # Mean comparison
    mean_a = np.mean(variant_a_scores)
    mean_b = np.mean(variant_b_scores)
    # Decision: the difference is significant only if p_value < alpha
    significant = p_value < alpha
    better = 'B' if mean_b > mean_a else 'A'
    return {
        'significant': significant,
        'p_value': p_value,
        'mean_a': mean_a,
        'mean_b': mean_b,
        'better_variant': better
    }
# Example
scores_a = [0.85, 0.87, 0.86, 0.84, 0.88]
scores_b = [0.90, 0.91, 0.89, 0.92, 0.90]
result = ab_test(scores_a, scores_b)
print(f"A/B test result: {result}")

A/B testing enables data-driven decisions: it measures real differences under real traffic and guides which variant to deploy.

Figure: A/B Testing

The diagram shows the A/B testing process: traffic is split between variants, metrics are collected for each, statistical significance is tested, and the better variant is selected for deployment.

Evaluation Suites

Evaluation suites provide comprehensive testing. They bundle multiple metrics, test various aspects of model behavior, and enable thorough evaluation.

A suite can combine classification, regression, and embedding metrics, test robustness, and measure generalization, providing a more complete picture than any single metric.

# Evaluation Suite
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
class EvaluationSuite:
    def __init__(self):
        self.metrics = []
    def add_metric(self, metric_func, name):
        self.metrics.append((name, metric_func))
    def evaluate(self, y_true, y_pred):
        # Apply every registered metric to the same predictions
        results = {}
        for name, metric_func in self.metrics:
            results[name] = metric_func(y_true, y_pred)
        return results
# Example (reusing y_true and y_pred from the classification example above)
suite = EvaluationSuite()
suite.add_metric(accuracy_score, 'accuracy')
suite.add_metric(precision_score, 'precision')
suite.add_metric(recall_score, 'recall')
suite.add_metric(f1_score, 'f1')
results = suite.evaluate(y_true, y_pred)
print(f"Evaluation results: {results}")

An evaluation suite gives a comprehensive assessment: it tests multiple aspects of performance in a single, repeatable pass.

Figure: Cross-Validation

The diagram shows the cross-validation process: the data is split into k folds, each fold serves once as the test set, and results are averaged across folds to give a more robust performance estimate. A short example follows.
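A minimal sketch of k-fold cross-validation with scikit-learn, assuming a synthetic dataset and a logistic regression model purely for illustration:

# Cross-Validation
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# Synthetic binary classification data for illustration
X, y = make_classification(n_samples=200, n_features=10, random_state=42)
model = LogisticRegression(max_iter=1000)
# 5-fold cross-validation; each fold serves once as the test set
scores = cross_val_score(model, X, y, cv=5, scoring='accuracy')
print(f"Fold accuracies: {scores}")
print(f"Mean accuracy: {scores.mean():.3f}")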

Summary

Model evaluation measures how well a model performs. Classification metrics measure classification quality, regression metrics measure prediction error, and embedding metrics measure embedding quality. A/B testing compares variants under real traffic, and evaluation suites provide comprehensive testing. Good evaluation guides improvements.
