Metrics
Evaluation modules.
MRR #
Bases: BaseRetrievalMetric
Mean Reciprocal Rank (MRR) metric.
Source code in llama-index-core/llama_index/core/evaluation/retrieval/metrics.py
compute #
compute(query: Optional[str] = None, expected_ids: Optional[List[str]] = None, retrieved_ids: Optional[List[str]] = None, expected_texts: Optional[List[str]] = None, retrieved_texts: Optional[List[str]] = None, **kwargs: Any) -> RetrievalMetricResult
Compute the metric.
Source code in llama-index-core/llama_index/core/evaluation/retrieval/metrics.py
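A minimal usage sketch, assuming `MRR` is importable from `llama_index.core.evaluation.retrieval.metrics` and that the score is the reciprocal rank of the first expected id found in `retrieved_ids`; the node ids and query text are illustrative:

```python
from llama_index.core.evaluation.retrieval.metrics import MRR

mrr = MRR()

# The first relevant node appears at rank 2, so the reciprocal rank is 1/2.
result = mrr.compute(
    query="What city is the Eiffel Tower in?",
    expected_ids=["node_paris"],
    retrieved_ids=["node_lyon", "node_paris", "node_nice"],
)
print(result.score)  # 0.5
```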
HitRate #
Bases: BaseRetrievalMetric
Hit rate metric.
Source code in llama-index-core/llama_index/core/evaluation/retrieval/metrics.py
compute #
compute(query: Optional[str] = None, expected_ids: Optional[List[str]] = None, retrieved_ids: Optional[List[str]] = None, expected_texts: Optional[List[str]] = None, retrieved_texts: Optional[List[str]] = None, **kwargs: Any) -> RetrievalMetricResult
Compute the metric.
Source code in llama-index-core/llama_index/core/evaluation/retrieval/metrics.py
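A minimal usage sketch, assuming `HitRate` is importable from `llama_index.core.evaluation.retrieval.metrics` and that a query scores 1.0 when at least one expected id appears among the retrieved ids; the ids below are illustrative:

```python
from llama_index.core.evaluation.retrieval.metrics import HitRate

hit_rate = HitRate()

# "node_b" is among the retrieved ids, so this query counts as a hit.
result = hit_rate.compute(
    query="example query",
    expected_ids=["node_a", "node_b"],
    retrieved_ids=["node_c", "node_b"],
)
print(result.score)  # 1.0
```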
RetrievalMetricResult #
Bases: BaseModel
Metric result.
Attributes:
Name | Type | Description
---|---|---
`score` | `float` | Score for the metric
`metadata` | `Dict[str, Any]` | Metadata for the metric result
Source code in llama-index-core/llama_index/core/evaluation/retrieval/metrics_base.py
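A minimal construction sketch, assuming `RetrievalMetricResult` is a Pydantic model with the `score` and `metadata` fields listed above and can be built directly from keyword arguments; the metadata contents are illustrative:

```python
from llama_index.core.evaluation.retrieval.metrics_base import RetrievalMetricResult

# Metric implementations return this object from compute(); it can also be
# constructed directly, e.g. in tests.
result = RetrievalMetricResult(score=0.5, metadata={"rank": 2})
print(result.score)     # 0.5
print(result.metadata)  # {'rank': 2}
```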
resolve_metrics #
resolve_metrics(metrics: List[str]) -> List[Type[BaseRetrievalMetric]]
Resolve metric classes from a list of metric names.
Source code in llama-index-core/llama_index/core/evaluation/retrieval/metrics.py
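A minimal usage sketch, assuming `"mrr"` and `"hit_rate"` are registered metric names; `resolve_metrics` returns the metric classes, which are then instantiated before use:

```python
from llama_index.core.evaluation.retrieval.metrics import resolve_metrics

# Map registered metric names to their classes, then instantiate each one.
metric_classes = resolve_metrics(["mrr", "hit_rate"])
metrics = [metric_cls() for metric_cls in metric_classes]
```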