Evaluation
sentence_transformers.sparse_encoder.evaluation
defines different classes that can be used to evaluate SparseEncoder models during training.
SparseInformationRetrievalEvaluator
- class sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator(queries: dict[str, str], corpus: dict[str, str], relevant_docs: dict[str, set[str]], corpus_chunk_size: int = 50000, mrr_at_k: list[int] = [10], ndcg_at_k: list[int] = [10], accuracy_at_k: list[int] = [1, 3, 5, 10], precision_recall_at_k: list[int] = [1, 3, 5, 10], map_at_k: list[int] = [100], show_progress_bar: bool = False, batch_size: int = 32, name: str = '', write_csv: bool = True, max_active_dims: int | None = None, score_functions: dict[str, Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] | None = None, main_score_function: str | SimilarityFunction | None = None, query_prompt: str | None = None, query_prompt_name: str | None = None, corpus_prompt: str | None = None, corpus_prompt_name: str | None = None, write_predictions: bool = False)[source]
This evaluator extends InformationRetrievalEvaluator, but is designed for sparse encoder models. This class evaluates an Information Retrieval (IR) setting: given a set of queries and a large corpus, it retrieves the top-k most similar documents for each query and measures Mean Reciprocal Rank (MRR), Recall@k, and Normalized Discounted Cumulative Gain (NDCG).
- Parameters:
queries (Dict[str, str]) – A dictionary mapping query IDs to queries.
corpus (Dict[str, str]) – A dictionary mapping document IDs to documents.
relevant_docs (Dict[str, Set[str]]) – A dictionary mapping query IDs to a set of relevant document IDs.
corpus_chunk_size (int) – The size of each chunk of the corpus. Defaults to 50000.
mrr_at_k (List[int]) – A list of k values for MRR calculation. Defaults to [10].
ndcg_at_k (List[int]) – A list of k values for NDCG calculation. Defaults to [10].
accuracy_at_k (List[int]) – A list of k values for accuracy calculation. Defaults to [1, 3, 5, 10].
precision_recall_at_k (List[int]) – A list of k values for precision and recall calculation. Defaults to [1, 3, 5, 10].
map_at_k (List[int]) – A list of k values for MAP calculation. Defaults to [100].
show_progress_bar (bool) – Whether to show a progress bar during evaluation. Defaults to False.
batch_size (int) – The batch size for evaluation. Defaults to 32.
name (str) – A name for the evaluation. Defaults to an empty string.
write_csv (bool) – Whether to write the evaluation results to a CSV file. Defaults to True.
max_active_dims (Optional[int], optional) – The maximum number of active dimensions to use. None uses the model's current max_active_dims. Defaults to None.
score_functions (Dict[str, Callable[[Tensor, Tensor], Tensor]]) – A dictionary mapping score function names to score functions. Defaults to the similarity function of the model.
main_score_function (Union[str, SimilarityFunction], optional) – The main score function to use for evaluation. Defaults to None.
query_prompt (str, optional) – The prompt to use when encoding the queries. Defaults to None.
query_prompt_name (str, optional) – The name of the prompt to use when encoding the queries. Defaults to None.
corpus_prompt (str, optional) – The prompt to use when encoding the corpus. Defaults to None.
corpus_prompt_name (str, optional) – The name of the prompt to use when encoding the corpus. Defaults to None.
write_predictions (bool) – Whether to write the predictions to a JSONL file. Defaults to False. This is useful for downstream evaluation, as the file can be used as input to the ReciprocalRankFusionEvaluator, which accepts precomputed predictions.
Example
import logging
import random

from datasets import load_dataset

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseInformationRetrievalEvaluator

logging.basicConfig(format="%(message)s", level=logging.INFO)

# Load a model
model = SparseEncoder("naver/splade-cocondenser-ensembledistil")

# Load the NFcorpus IR dataset (https://hugging-face.cn/datasets/BeIR/nfcorpus, https://hugging-face.cn/datasets/BeIR/nfcorpus-qrels)
corpus = load_dataset("BeIR/nfcorpus", "corpus", split="corpus")
queries = load_dataset("BeIR/nfcorpus", "queries", split="queries")
relevant_docs_data = load_dataset("BeIR/nfcorpus-qrels", split="test")

# For this dataset, we want to concatenate the title and texts for the corpus
corpus = corpus.map(lambda x: {"text": x["title"] + " " + x["text"]}, remove_columns=["title"])

# Shrink the corpus size heavily to only the relevant documents + 1,000 random documents
required_corpus_ids = set(map(str, relevant_docs_data["corpus-id"]))
required_corpus_ids |= set(random.sample(corpus["_id"], k=1000))
corpus = corpus.filter(lambda x: x["_id"] in required_corpus_ids)

# Convert the datasets to dictionaries
corpus = dict(zip(corpus["_id"], corpus["text"]))  # Our corpus (cid => document)
queries = dict(zip(queries["_id"], queries["text"]))  # Our queries (qid => question)
relevant_docs = {}  # Query ID to relevant documents (qid => set([relevant_cids])
for qid, corpus_ids in zip(relevant_docs_data["query-id"], relevant_docs_data["corpus-id"]):
    qid = str(qid)
    corpus_ids = str(corpus_ids)
    if qid not in relevant_docs:
        relevant_docs[qid] = set()
    relevant_docs[qid].add(corpus_ids)

# Given queries, a corpus and a mapping with relevant documents, the SparseInformationRetrievalEvaluator computes different IR metrics.
ir_evaluator = SparseInformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="BeIR-nfcorpus-subset-test",
    show_progress_bar=True,
    batch_size=16,
)

# Run evaluation
results = ir_evaluator(model)
'''
Queries: 323
Corpus: 3269

Score-Function: dot
Accuracy@1: 50.77%
Accuracy@3: 64.40%
Accuracy@5: 66.87%
Accuracy@10: 71.83%
Precision@1: 50.77%
Precision@3: 40.45%
Precision@5: 34.06%
Precision@10: 25.98%
Recall@1: 6.27%
Recall@3: 11.69%
Recall@5: 13.74%
Recall@10: 17.23%
MRR@10: 0.5814
NDCG@10: 0.3621
MAP@100: 0.1838
Model Query Sparsity: Active Dimensions: 40.0, Sparsity Ratio: 0.9987
Model Corpus Sparsity: Active Dimensions: 206.2, Sparsity Ratio: 0.9932
'''
# Print the results
print(f"Primary metric: {ir_evaluator.primary_metric}")
# => Primary metric: BeIR-nfcorpus-subset-test_dot_ndcg@10
print(f"Primary metric value: {results[ir_evaluator.primary_metric]:.4f}")
# => Primary metric value: 0.3621
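The max_active_dims argument can cap how many dimensions stay active per embedding during evaluation. The following is a minimal sketch that reuses the queries, corpus, and relevant_docs from the example above; the cap of 64 is an assumed value for illustration, not a recommendation:

# Sketch (assumed cap value): re-run the same evaluation with at most 64 active dimensions
# per embedding to inspect the quality/sparsity trade-off.
capped_ir_evaluator = SparseInformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    max_active_dims=64,
    name="BeIR-nfcorpus-subset-test-64dims",
    batch_size=16,
)
capped_results = capped_ir_evaluator(model)
print(f"Capped primary metric value: {capped_results[capped_ir_evaluator.primary_metric]:.4f}")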
SparseNanoBEIREvaluator
- class sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator(dataset_names: list[DatasetNameType] | None = None, mrr_at_k: list[int] = [10], ndcg_at_k: list[int] = [10], accuracy_at_k: list[int] = [1, 3, 5, 10], precision_recall_at_k: list[int] = [1, 3, 5, 10], map_at_k: list[int] = [100], show_progress_bar: bool = False, batch_size: int = 32, write_csv: bool = True, max_active_dims: int | None = None, score_functions: dict[str, Callable[[Tensor, Tensor], Tensor]] | None = None, main_score_function: str | SimilarityFunction | None = None, aggregate_fn: Callable[[list[float]], float] = <function mean>, aggregate_key: str = 'mean', query_prompts: str | dict[str, str] | None = None, corpus_prompts: str | dict[str, str] | None = None, write_predictions: bool = False)[source]
This evaluator extends NanoBEIREvaluator, but is designed for sparse encoder models. This class evaluates the performance of a SparseEncoder model on the NanoBEIR collection of Information Retrieval datasets.
The collection is a set of datasets based on the BEIR collection, but with significantly smaller sizes, so it can be used to quickly assess the retrieval performance of a model before committing to a full evaluation. The datasets are available on Hugging Face in the NanoBEIR collection. This evaluator returns the same metrics as the InformationRetrievalEvaluator (i.e., MRR, nDCG, Recall@k), for each dataset and on average.
- Parameters:
dataset_names (List[str]) – The names of the datasets to evaluate on. Defaults to all datasets.
mrr_at_k (List[int]) – A list of k values for MRR calculation. Defaults to [10].
ndcg_at_k (List[int]) – A list of k values for NDCG calculation. Defaults to [10].
accuracy_at_k (List[int]) – A list of k values for accuracy calculation. Defaults to [1, 3, 5, 10].
precision_recall_at_k (List[int]) – A list of k values for precision and recall calculation. Defaults to [1, 3, 5, 10].
map_at_k (List[int]) – A list of k values for MAP calculation. Defaults to [100].
show_progress_bar (bool) – Whether to show a progress bar during evaluation. Defaults to False.
batch_size (int) – The batch size for evaluation. Defaults to 32.
write_csv (bool) – Whether to write the evaluation results to a CSV file. Defaults to True.
max_active_dims (Optional[int], optional) – The maximum number of active dimensions to use. None uses the model's current max_active_dims. Defaults to None.
score_functions (Dict[str, Callable[[Tensor, Tensor], Tensor]]) – A dictionary mapping score function names to score functions. Defaults to {SimilarityFunction.COSINE.value: cos_sim, SimilarityFunction.DOT_PRODUCT.value: dot_score}.
main_score_function (Union[str, SimilarityFunction], optional) – The main score function to use for evaluation. Defaults to None.
aggregate_fn (Callable[[list[float]], float]) – The function used to aggregate the scores. Defaults to np.mean.
aggregate_key (str) – The key used for the aggregated scores. Defaults to "mean".
query_prompts (str | dict[str, str], optional) – The prompts to add to the queries. If a string, the same prompt is added to all queries. If a dict, all datasets in dataset_names must be keys.
corpus_prompts (str | dict[str, str], optional) – The prompts to add to the corpus. If a string, the same prompt is added to all corpora. If a dict, all datasets in dataset_names must be keys.
write_predictions (bool) – Whether to write the predictions to a JSONL file. Defaults to False. This is useful for downstream evaluation, as the file can be used as input to the ReciprocalRankFusionEvaluator, which accepts precomputed predictions.
Example
import logging

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator

logging.basicConfig(format="%(message)s", level=logging.INFO)

# Load a model
model = SparseEncoder("naver/splade-cocondenser-ensembledistil")

datasets = ["QuoraRetrieval", "MSMARCO"]

evaluator = SparseNanoBEIREvaluator(
    dataset_names=datasets,
    show_progress_bar=True,
    batch_size=32,
)

# Run evaluation
results = evaluator(model)
'''
Evaluating NanoQuoraRetrieval
Information Retrieval Evaluation of the model on the NanoQuoraRetrieval dataset:
Queries: 50
Corpus: 5046

Score-Function: dot
Accuracy@1: 92.00%
Accuracy@3: 96.00%
Accuracy@5: 98.00%
Accuracy@10: 100.00%
Precision@1: 92.00%
Precision@3: 40.00%
Precision@5: 24.80%
Precision@10: 13.20%
Recall@1: 79.73%
Recall@3: 92.53%
Recall@5: 94.93%
Recall@10: 98.27%
MRR@10: 0.9439
NDCG@10: 0.9339
MAP@100: 0.9070
Model Query Sparsity: Active Dimensions: 59.4, Sparsity Ratio: 0.9981
Model Corpus Sparsity: Active Dimensions: 61.9, Sparsity Ratio: 0.9980

Information Retrieval Evaluation of the model on the NanoMSMARCO dataset:
Queries: 50
Corpus: 5043

Score-Function: dot
Accuracy@1: 48.00%
Accuracy@3: 74.00%
Accuracy@5: 76.00%
Accuracy@10: 86.00%
Precision@1: 48.00%
Precision@3: 24.67%
Precision@5: 15.20%
Precision@10: 8.60%
Recall@1: 48.00%
Recall@3: 74.00%
Recall@5: 76.00%
Recall@10: 86.00%
MRR@10: 0.6191
NDCG@10: 0.6780
MAP@100: 0.6277
Model Query Sparsity: Active Dimensions: 45.4, Sparsity Ratio: 0.9985
Model Corpus Sparsity: Active Dimensions: 122.6, Sparsity Ratio: 0.9960

Average Queries: 50.0
Average Corpus: 5044.5

Aggregated for Score Function: dot
Accuracy@1: 70.00%
Accuracy@3: 85.00%
Accuracy@5: 87.00%
Accuracy@10: 93.00%
Precision@1: 70.00%
Recall@1: 63.87%
Precision@3: 32.33%
Recall@3: 83.27%
Precision@5: 20.00%
Recall@5: 85.47%
Precision@10: 10.90%
Recall@10: 92.13%
MRR@10: 0.7815
NDCG@10: 0.8060
Model Query Sparsity: Active Dimensions: 52.4, Sparsity Ratio: 0.9983
Model Corpus Sparsity: Active Dimensions: 92.2, Sparsity Ratio: 0.9970
'''
# Print the results
print(f"Primary metric: {evaluator.primary_metric}")
# => Primary metric: NanoBEIR_mean_dot_ndcg@10
print(f"Primary metric value: {results[evaluator.primary_metric]:.4f}")
# => Primary metric value: 0.8060
SparseEmbeddingSimilarityEvaluator
- class sentence_transformers.sparse_encoder.evaluation.SparseEmbeddingSimilarityEvaluator(sentences1: list[str], sentences2: list[str], scores: list[float], batch_size: int = 16, main_similarity: str | SimilarityFunction | None = None, similarity_fn_names: list[Literal['cosine', 'euclidean', 'manhattan', 'dot']] | None = None, name: str = '', show_progress_bar: bool = False, write_csv: bool = True, max_active_dims: int | None = None)[source]
This evaluator extends EmbeddingSimilarityEvaluator, but is designed for sparse encoder models. It evaluates a model by computing the Spearman and Pearson rank correlation between the embedding similarities and gold standard labels. The metrics are cosine similarity as well as Euclidean and Manhattan distance. The returned score is the Spearman correlation for the specified metric.
- Parameters:
sentences1 (List[str]) – List with the first sentence of each pair.
sentences2 (List[str]) – List with the second sentence of each pair.
scores (List[float]) – Similarity scores between sentences1[i] and sentences2[i].
batch_size (int, optional) – The batch size used to process the sentences. Defaults to 16.
main_similarity (Optional[Union[str, SimilarityFunction]], optional) – The main similarity function to use. Can be a string (e.g. "cosine", "dot") or a SimilarityFunction object. Defaults to None.
similarity_fn_names (List[str], optional) – List of similarity function names to use. If None, the model's similarity_fn_name attribute is used. Defaults to None.
name (str, optional) – The name of the evaluator. Defaults to an empty string.
show_progress_bar (bool, optional) – Whether to show a progress bar during evaluation. Defaults to False.
write_csv (bool, optional) – Whether to write the evaluation results to a CSV file. Defaults to True.
max_active_dims (Optional[int], optional) – The maximum number of active dimensions to use. None uses the model's current max_active_dims. Defaults to None.
Example
import logging

from datasets import load_dataset

from sentence_transformers import SparseEncoder, SimilarityFunction
from sentence_transformers.sparse_encoder.evaluation import SparseEmbeddingSimilarityEvaluator

logging.basicConfig(format="%(message)s", level=logging.INFO)

# Load a model
model = SparseEncoder("naver/splade-cocondenser-ensembledistil")

# Load the STSB dataset (https://hugging-face.cn/datasets/sentence-transformers/stsb)
eval_dataset = load_dataset("sentence-transformers/stsb", split="validation")

# Initialize the evaluator
dev_evaluator = SparseEmbeddingSimilarityEvaluator(
    sentences1=eval_dataset["sentence1"],
    sentences2=eval_dataset["sentence2"],
    scores=eval_dataset["score"],
    main_similarity=SimilarityFunction.COSINE,  # even though the model is trained with dot, we need to set it to cosine for evaluation as the score in the dataset is cosine similarity
    name="sts_dev",
)
results = dev_evaluator(model)
'''
EmbeddingSimilarityEvaluator: Evaluating the model on the sts_dev dataset:
Cosine-Similarity: Pearson: 0.8429 Spearman: 0.8366
Model Sparsity: Active Dimensions: 78.3, Sparsity Ratio: 0.9974
'''
# Print the results
print(f"Primary metric: {dev_evaluator.primary_metric}")
# => Primary metric: sts_dev_spearman_cosine
print(f"Primary metric value: {results[dev_evaluator.primary_metric]:.4f}")
# => Primary metric value: 0.8366
SparseBinaryClassificationEvaluator
- class sentence_transformers.sparse_encoder.evaluation.SparseBinaryClassificationEvaluator(sentences1: list[str], sentences2: list[str], labels: list[int], name: str = '', batch_size: int = 32, show_progress_bar: bool = False, write_csv: bool = True, max_active_dims: int | None = None, similarity_fn_names: list[Literal['cosine', 'dot', 'euclidean', 'manhattan']] | None = None)[source]
This evaluator extends BinaryClassificationEvaluator, but is designed for sparse encoder models. It evaluates a model on how accurately it identifies similar and dissimilar sentences by computing embedding similarities. The metrics are cosine similarity, dot product, Euclidean distance, and Manhattan distance. The returned score is the accuracy for the specified metric.
The results are written to a CSV file. If a CSV file already exists, values are appended to it.
The labels need to be 0 for dissimilar pairs and 1 for similar pairs.
- Parameters:
sentences1 (List[str]) – The first column of sentences.
sentences2 (List[str]) – The second column of sentences.
labels (List[int]) – labels[i] is the label for the pair (sentences1[i], sentences2[i]). Must be 0 or 1.
name (str, optional) – Name for the output. Defaults to an empty string.
batch_size (int, optional) – Batch size used to compute embeddings. Defaults to 32.
show_progress_bar (bool, optional) – If True, prints a progress bar. Defaults to False.
write_csv (bool, optional) – Write results to a CSV file. Defaults to True.
max_active_dims (Optional[int], optional) – The maximum number of active dimensions to use. None uses the model's current max_active_dims. Defaults to None.
similarity_fn_names (Optional[List[Literal["cosine", "dot", "euclidean", "manhattan"]]], optional) – The similarity functions to use. If not specified, defaults to the model's similarity_fn_name attribute. Defaults to None.
Example
import logging

from datasets import load_dataset

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseBinaryClassificationEvaluator

logging.basicConfig(format="%(asctime)s - %(message)s", datefmt="%Y-%m-%d %H:%M:%S", level=logging.INFO)

# Initialize the SPLADE model
model = SparseEncoder("naver/splade-cocondenser-ensembledistil")

# Load a dataset with two text columns and a class label column (https://hugging-face.cn/datasets/sentence-transformers/quora-duplicates)
eval_dataset = load_dataset("sentence-transformers/quora-duplicates", "pair-class", split="train[-1000:]")

# Initialize the evaluator
binary_acc_evaluator = SparseBinaryClassificationEvaluator(
    sentences1=eval_dataset["sentence1"],
    sentences2=eval_dataset["sentence2"],
    labels=eval_dataset["label"],
    name="quora_duplicates_dev",
    show_progress_bar=True,
    similarity_fn_names=["cosine", "dot", "euclidean", "manhattan"],
)
results = binary_acc_evaluator(model)
'''
Accuracy with Cosine-Similarity: 75.00 (Threshold: 0.8668)
F1 with Cosine-Similarity: 67.22 (Threshold: 0.5974)
Precision with Cosine-Similarity: 54.18
Recall with Cosine-Similarity: 88.51
Average Precision with Cosine-Similarity: 67.81
Matthews Correlation with Cosine-Similarity: 49.56

Accuracy with Dot-Product: 76.50 (Threshold: 23.4236)
F1 with Dot-Product: 67.00 (Threshold: 19.0095)
Precision with Dot-Product: 55.93
Recall with Dot-Product: 83.54
Average Precision with Dot-Product: 65.89
Matthews Correlation with Dot-Product: 48.88

Accuracy with Euclidean-Distance: 67.70 (Threshold: -10.0041)
F1 with Euclidean-Distance: 48.60 (Threshold: -0.1876)
Precision with Euclidean-Distance: 32.13
Recall with Euclidean-Distance: 99.69
Average Precision with Euclidean-Distance: 20.52
Matthews Correlation with Euclidean-Distance: -4.59

Accuracy with Manhattan-Distance: 67.70 (Threshold: -103.0263)
F1 with Manhattan-Distance: 48.60 (Threshold: -0.8532)
Precision with Manhattan-Distance: 32.13
Recall with Manhattan-Distance: 99.69
Average Precision with Manhattan-Distance: 21.05
Matthews Correlation with Manhattan-Distance: -4.59

Model Sparsity: Active Dimensions: 61.2, Sparsity Ratio: 0.9980
'''
# Print the results
print(f"Primary metric: {binary_acc_evaluator.primary_metric}")
# => Primary metric: quora_duplicates_dev_max_ap
print(f"Primary metric value: {results[binary_acc_evaluator.primary_metric]:.4f}")
# => Primary metric value: 0.6781
SparseTripletEvaluator
- class sentence_transformers.sparse_encoder.evaluation.SparseTripletEvaluator(anchors: list[str], positives: list[str], negatives: list[str], main_similarity_function: str | SimilarityFunction | None = None, margin: float | dict[str, float] | None = None, name: str = '', batch_size: int = 16, show_progress_bar: bool = False, write_csv: bool = True, max_active_dims: int | None = None, similarity_fn_names: list[Literal['cosine', 'dot', 'euclidean', 'manhattan']] | None = None, main_distance_function: str | SimilarityFunction | None = 'deprecated')[source]
This evaluator extends TripletEvaluator, but is designed for sparse encoder models. It evaluates a model based on triplets: (sentence, positive_example, negative_example). It checks whether similarity(sentence, positive_example) > similarity(sentence, negative_example) + margin holds.
- Parameters:
anchors (List[str]) – Sentences to check similarity against (e.g. a query).
positives (List[str]) – List of positive sentences.
negatives (List[str]) – List of negative sentences.
main_similarity_function (Union[str, SimilarityFunction], optional) – The similarity function to use. If not specified, cosine similarity, dot product, Euclidean, and Manhattan similarity are used. Defaults to None.
margin (Union[float, Dict[str, float]], optional) – Margins for the various similarity metrics. If a float is provided, it is used as the margin for all similarity metrics. If a dict is provided, the keys should be "cosine", "dot", "manhattan", and "euclidean". The value specifies the minimum margin by which the negative should be further from the anchor than the positive. Defaults to None.
name (str) – Name for the output. Defaults to an empty string.
batch_size (int) – Batch size used to compute embeddings. Defaults to 16.
show_progress_bar (bool) – If True, prints a progress bar. Defaults to False.
write_csv (bool) – Write results to a CSV file. Defaults to True.
max_active_dims (Optional[int], optional) – The maximum number of active dimensions to use. None uses the model's current max_active_dims. Defaults to None.
similarity_fn_names (List[str], optional) – List of similarity function names to evaluate. If not specified, the model.similarity_fn_name is used for evaluation. Defaults to None.
Example
import logging

from datasets import load_dataset

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseTripletEvaluator

logging.basicConfig(format="%(message)s", level=logging.INFO)

# Load a model
model = SparseEncoder("naver/splade-cocondenser-ensembledistil")

# Load triplets from the AllNLI dataset
# The dataset contains triplets of (anchor, positive, negative) sentences
dataset = load_dataset("sentence-transformers/all-nli", "triplet", split="dev[:1000]")

# Initialize the SparseTripletEvaluator
evaluator = SparseTripletEvaluator(
    anchors=dataset[:1000]["anchor"],
    positives=dataset[:1000]["positive"],
    negatives=dataset[:1000]["negative"],
    name="all_nli_dev",
    batch_size=32,
    show_progress_bar=True,
)

# Run the evaluation
results = evaluator(model)
'''
TripletEvaluator: Evaluating the model on the all_nli_dev dataset:
Accuracy Dot Similarity: 85.40%
Model Anchor Sparsity: Active Dimensions: 103.0, Sparsity Ratio: 0.9966
Model Positive Sparsity: Active Dimensions: 67.4, Sparsity Ratio: 0.9978
Model Negative Sparsity: Active Dimensions: 65.9, Sparsity Ratio: 0.9978
'''
# Print the results
print(f"Primary metric: {evaluator.primary_metric}")
# => Primary metric: all_nli_dev_dot_accuracy
print(f"Primary metric value: {results[evaluator.primary_metric]:.4f}")
# => Primary metric value: 0.8540
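The margin argument can also be set per similarity function. The following is a minimal sketch reusing the dataset loaded in the example above; the margin values are assumptions chosen for illustration only:

# Sketch (assumed margin values): only count a triplet as correct if the positive beats
# the negative by the given margin for each similarity function.
margin_evaluator = SparseTripletEvaluator(
    anchors=dataset[:1000]["anchor"],
    positives=dataset[:1000]["positive"],
    negatives=dataset[:1000]["negative"],
    similarity_fn_names=["dot", "cosine"],
    margin={"dot": 0.5, "cosine": 0.05},
    name="all_nli_dev_margin",
)
margin_results = margin_evaluator(model)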
SparseRerankingEvaluator
- class sentence_transformers.sparse_encoder.evaluation.SparseRerankingEvaluator(samples: list[dict[str, str | list[str]]], at_k: int = 10, name: str = '', write_csv: bool = True, similarity_fct: ~typing.Callable[[~torch.Tensor, ~torch.Tensor], ~torch.Tensor] = <function cos_sim>, batch_size: int = 64, show_progress_bar: bool = False, use_batched_encoding: bool = True, max_active_dims: int | None = None, mrr_at_k: int | None = None)[source]
This evaluator extends RerankingEvaluator, but is designed for sparse encoder models.
This class evaluates a SparseEncoder model for the task of re-ranking.
Given a query and a list of documents, it computes the score [query, doc_i] for all possible documents and sorts them in decreasing order. Then, MRR@10, NDCG@10, and MAP are computed to measure the quality of the ranking.
- Parameters:
samples (list) – A list of dictionaries, where each dictionary represents a sample and has the following keys:
"query": The search query.
"positive": A list of positive (relevant) documents.
"negative": A list of negative (irrelevant) documents.
at_k (int, optional) – Only consider the top k most similar documents to each query for the evaluation. Defaults to 10.
name (str, optional) – The name of the evaluator. Defaults to an empty string.
write_csv (bool, optional) – Write results to a CSV file. Defaults to True.
similarity_fct (Callable[[torch.Tensor, torch.Tensor], torch.Tensor], optional) – The similarity function between sentence embeddings. Defaults to cos_sim (cosine similarity).
batch_size (int, optional) – The batch size to compute sentence embeddings. Defaults to 64.
show_progress_bar (bool, optional) – Whether to show a progress bar when computing embeddings. Defaults to False.
use_batched_encoding (bool, optional) – Whether to encode queries and documents in batches for greater speed, or one at a time to save memory. Defaults to True.
max_active_dims (Optional[int], optional) – The maximum number of active dimensions to use. None uses the model's current max_active_dims. Defaults to None.
mrr_at_k (Optional[int], optional) – Deprecated parameter. Please use at_k instead. Defaults to None.
Example
import logging

from datasets import load_dataset

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseRerankingEvaluator

logging.basicConfig(format="%(message)s", level=logging.INFO)

# Load a model
model = SparseEncoder("naver/splade-cocondenser-ensembledistil")

# Load a dataset with queries, positives, and negatives
eval_dataset = load_dataset("microsoft/ms_marco", "v1.1", split="validation").select(range(1000))

samples = [
    {
        "query": sample["query"],
        "positive": [
            text
            for is_selected, text in zip(sample["passages"]["is_selected"], sample["passages"]["passage_text"])
            if is_selected
        ],
        "negative": [
            text
            for is_selected, text in zip(sample["passages"]["is_selected"], sample["passages"]["passage_text"])
            if not is_selected
        ],
    }
    for sample in eval_dataset
]

# Now evaluate using only the documents from the 1000 samples
reranking_evaluator = SparseRerankingEvaluator(
    samples=samples,
    name="ms-marco-dev-small",
    show_progress_bar=True,
    batch_size=32,
)

results = reranking_evaluator(model)
'''
RerankingEvaluator: Evaluating the model on the ms-marco-dev-small dataset:
Queries: 967 Positives: Min 1.0, Mean 1.1, Max 3.0 Negatives: Min 1.0, Mean 7.1, Max 9.0
MAP: 53.41
MRR@10: 54.14
NDCG@10: 65.06
Model Query Sparsity: Active Dimensions: 42.2, Sparsity Ratio: 0.9986
Model Corpus Sparsity: Active Dimensions: 126.5, Sparsity Ratio: 0.9959
'''
# Print the results
print(f"Primary metric: {reranking_evaluator.primary_metric}")
# => Primary metric: ms-marco-dev-small_ndcg@10
print(f"Primary metric value: {results[reranking_evaluator.primary_metric]:.4f}")
# => Primary metric value: 0.6506
SparseTranslationEvaluator
- class sentence_transformers.sparse_encoder.evaluation.SparseTranslationEvaluator(source_sentences: list[str], target_sentences: list[str], show_progress_bar: bool = False, batch_size: int = 16, name: str = '', print_wrong_matches: bool = False, write_csv: bool = True, max_active_dims: int | None = None)[source]
This evaluator extends TranslationEvaluator, but is designed for sparse encoder models. Given two sets of sentences in different languages, e.g. (en_1, en_2, en_3, ...) and (fr_1, fr_2, fr_3, ...), where fr_i is the translation of en_i, it checks whether vec(en_i) has the highest similarity with vec(fr_i). Accuracy is computed in both directions.
The labels need to indicate the similarity between the sentences.
- Parameters:
source_sentences (List[str]) – List of sentences in the source language.
target_sentences (List[str]) – List of sentences in the target language.
show_progress_bar (bool) – Whether to show a progress bar when computing embeddings. Defaults to False.
batch_size (int) – The batch size to compute sentence embeddings. Defaults to 16.
name (str) – The name of the evaluator. Defaults to an empty string.
print_wrong_matches (bool) – Whether to print incorrect matches. Defaults to False.
write_csv (bool) – Whether to write the evaluation results to a CSV file. Defaults to True.
max_active_dims (Optional[int], optional) – The maximum number of active dimensions to use. None uses the model's current max_active_dims. Defaults to None.
Example
import logging

from datasets import load_dataset

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseTranslationEvaluator

logging.basicConfig(format="%(message)s", level=logging.INFO)

# Load a model, not multilingual but hope to see some on the hub soon
model = SparseEncoder("naver/splade-cocondenser-ensembledistil")

# Load a parallel sentences dataset
dataset = load_dataset("sentence-transformers/parallel-sentences-news-commentary", "en-nl", split="train[:1000]")

# Initialize the TranslationEvaluator using the same texts from two languages
translation_evaluator = SparseTranslationEvaluator(
    source_sentences=dataset["english"],
    target_sentences=dataset["non_english"],
    name="news-commentary-en-nl",
)
results = translation_evaluator(model)
'''
Evaluating translation matching Accuracy of the model on the news-commentary-en-nl dataset:
Accuracy src2trg: 41.40
Accuracy trg2src: 47.60
Model Sparsity: Active Dimensions: 112.3, Sparsity Ratio: 0.9963
'''
# Print the results
print(f"Primary metric: {translation_evaluator.primary_metric}")
# => Primary metric: news-commentary-en-nl_mean_accuracy
print(f"Primary metric value: {results[translation_evaluator.primary_metric]:.4f}")
# => Primary metric value: 0.4450
SparseMSEEvaluator
- class sentence_transformers.sparse_encoder.evaluation.SparseMSEEvaluator(source_sentences: list[str], target_sentences: list[str], teacher_model=None, show_progress_bar: bool = False, batch_size: int = 32, name: str = '', write_csv: bool = True, max_active_dims: int | None = None)[source]
This evaluator extends MSEEvaluator, but is designed for sparse encoder models. Note that this evaluator does not yet use the sparse tensor PyTorch representation, so memory issues may occur.
It computes the mean squared error (x100) between the computed sentence embeddings and some target sentence embeddings.
The MSE is computed as ||teacher.encode(source_sentences) - student.encode(target_sentences)||.
For multilingual knowledge distillation (https://arxiv.org/abs/2004.09813), the source sentences are in English and the target sentences are in a different language, such as German, Chinese, Spanish, etc.
- Parameters:
source_sentences (List[str]) – The source sentences, to be embedded with the teacher model.
target_sentences (List[str]) – The target sentences, to be embedded with the student model.
teacher_model (SparseEncoder, optional) – The teacher model used to compute the source sentence embeddings.
show_progress_bar (bool, optional) – Whether to show a progress bar when computing embeddings. Defaults to False.
batch_size (int, optional) – The batch size to compute sentence embeddings. Defaults to 32.
name (str, optional) – The name of the evaluator. Defaults to an empty string.
write_csv (bool, optional) – Write results to a CSV file. Defaults to True.
max_active_dims (Optional[int], optional) – The maximum number of active dimensions to use. None uses the model's current max_active_dims. Defaults to None.
Example
import logging

from datasets import load_dataset

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseMSEEvaluator

logging.basicConfig(format="%(message)s", level=logging.INFO)

# Load a model
student_model = SparseEncoder("prithivida/Splade_PP_en_v1")
teacher_model = SparseEncoder("naver/splade-cocondenser-ensembledistil")

# Load any dataset with some texts
dataset = load_dataset("sentence-transformers/stsb", split="validation")
sentences = dataset["sentence1"] + dataset["sentence2"]

# Given queries, a corpus and a mapping with relevant documents, the SparseMSEEvaluator computes different MSE metrics.
mse_evaluator = SparseMSEEvaluator(
    source_sentences=sentences,
    target_sentences=sentences,
    teacher_model=teacher_model,
    name="stsb-dev",
)
results = mse_evaluator(student_model)
'''
MSE evaluation (lower = better) on the stsb-dev dataset:
MSE (*100): 0.034905
Model Sparsity: Active Dimensions: 54.6, Sparsity Ratio: 0.9982
'''
# Print the results
print(f"Primary metric: {mse_evaluator.primary_metric}")
# => Primary metric: stsb-dev_negative_mse
print(f"Primary metric value: {results[mse_evaluator.primary_metric]:.4f}")
# => Primary metric value: -0.0349
ReciprocalRankFusionEvaluator
- class sentence_transformers.sparse_encoder.evaluation.ReciprocalRankFusionEvaluator(dense_samples: list[dict[str, str | list[str]]], sparse_samples: list[dict[str, str | list[str]]], at_k: int = 10, rrf_k: int = 60, name: str = '', batch_size: int = 32, show_progress_bar: bool = False, write_csv: bool = True, write_predictions: bool = False)[source]
This class evaluates a hybrid search approach using Reciprocal Rank Fusion (RRF).
Given a query and two separately ranked lists of documents from different retrievers (e.g., sparse and dense), it merges them using the RRF formula and computes metrics such as MRR@k, NDCG@k, and MAP.
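For reference, RRF scores each document as the sum over retrievers of 1 / (rrf_k + rank), where rank is the document's 1-based position in that retriever's list. The snippet below is a minimal illustration of this formula, not the evaluator's internal implementation; the function name and the use of the raw document text as the merge key are assumptions:

# Illustration of the RRF formula (not the library's internal code)
def reciprocal_rank_fusion(dense_ranking: list[str], sparse_ranking: list[str], rrf_k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in (dense_ranking, sparse_ranking):
        for rank, doc in enumerate(ranking, start=1):
            # Each retriever contributes 1 / (rrf_k + rank) to the document's fused score
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (rrf_k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# A document ranked 1st by the dense retriever and 3rd by the sparse retriever
# receives 1/61 + 1/63 ≈ 0.0323.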
- Parameters:
dense_samples (list) – A list of dictionaries for the dense retriever results. Each dictionary should contain: - 'query_id': The query ID - 'query': The search query text - 'positive': A list of relevant documents - 'documents': A list of all documents (including the relevant ones)
sparse_samples (list) – A list of dictionaries for the sparse retriever results, in the same format as above.
at_k (int) – Only consider the top k documents for the evaluation. Defaults to 10.
rrf_k (int) – The constant in the RRF formula. Defaults to 60.
name (str) – The name of the evaluator. Defaults to "".
batch_size (int) – The batch size used for the evaluation. Defaults to 32.
show_progress_bar (bool) – Whether to output a progress bar. Defaults to False.
write_csv (bool) – Whether to write the results to a CSV file. Defaults to True.
write_predictions (bool) – Whether to write the fused predictions to a JSONL file. Defaults to False.
Example
See the usage example: Applications > Retrieve & Rerank
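In addition to that example, the following is a minimal, self-contained sketch built only from the documented parameters: the toy rankings are invented for illustration, and it is assumed that the evaluator is invoked like the other evaluators via its __call__ method, with the model only needed to satisfy that interface since both rankings are already pre-computed.

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import ReciprocalRankFusionEvaluator

# Toy, hand-written rankings; in practice these would come from a dense and a sparse
# retriever, e.g. the JSONL predictions written with write_predictions=True.
dense_samples = [
    {
        "query_id": "q1",
        "query": "what causes rain",
        "positive": ["Rain forms when water vapour condenses into droplets."],
        "documents": [
            "Rain forms when water vapour condenses into droplets.",
            "The Amazon is the largest rainforest.",
            "Clouds are made of tiny water droplets.",
        ],
    }
]
sparse_samples = [
    {
        "query_id": "q1",
        "query": "what causes rain",
        "positive": ["Rain forms when water vapour condenses into droplets."],
        "documents": [
            "Clouds are made of tiny water droplets.",
            "Rain forms when water vapour condenses into droplets.",
            "The Amazon is the largest rainforest.",
        ],
    }
]

rrf_evaluator = ReciprocalRankFusionEvaluator(
    dense_samples=dense_samples,
    sparse_samples=sparse_samples,
    at_k=10,
    rrf_k=60,
    name="toy-hybrid",
)

# Assumption: the rankings are pre-computed, so the model is not used for re-encoding here.
model = SparseEncoder("naver/splade-cocondenser-ensembledistil")
results = rrf_evaluator(model)
print(results)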