Performing Grouped Similarity Search in a Collection with the Python SDK

This article describes how to perform grouped similarity search in a Collection using the Python SDK.

Prerequisites

  • A Cluster has been created
  • An API-KEY has been obtained
  • The latest version of the SDK has been installed

Interface Definition

Python example:

Collection.query_group_by(
    self,
    vector: Optional[Union[List[Union[int, float]], np.ndarray]] = None,
    *,
    group_by_field: str,
    group_count: int = 10,
    group_topk: int = 10,
    id: Optional[str] = None,
    filter: Optional[str] = None,
    include_vector: bool = False,
    partition: Optional[str] = None,
    output_fields: Optional[List[str]] = None,
    sparse_vector: Optional[Dict[int, float]] = None,
    async_req: bool = False,
) -> DashVectorResponse:

Usage Example

Note

Replace YOUR_API_KEY with your api-key and YOUR_CLUSTER_ENDPOINT with your Cluster Endpoint for the code to run correctly.

Python example:

import dashvector
import numpy as np

client = dashvector.Client(
    api_key='YOUR_API_KEY',
    endpoint='YOUR_CLUSTER_ENDPOINT'
)
ret = client.create(
    name='group_by_demo',
    dimension=4,
    fields_schema={'document_id': str, 'chunk_id': int}
)
assert ret

collection = client.get(name='group_by_demo')

ret = collection.insert([
    ('1', np.random.rand(4), {'document_id': 'paper-01', 'chunk_id': 1, 'content': 'xxxA'}),
    ('2', np.random.rand(4), {'document_id': 'paper-01', 'chunk_id': 2, 'content': 'xxxB'}),
    ('3', np.random.rand(4), {'document_id': 'paper-02', 'chunk_id': 1, 'content': 'xxxC'}),
    ('4', np.random.rand(4), {'document_id': 'paper-02', 'chunk_id': 2, 'content': 'xxxD'}),
    ('5', np.random.rand(4), {'document_id': 'paper-02', 'chunk_id': 3, 'content': 'xxxE'}),
    ('6', np.random.rand(4), {'document_id': 'paper-03', 'chunk_id': 1, 'content': 'xxxF'}),
])
assert ret

Grouped Similarity Search by Vector

Python example:

ret = collection.query_group_by(
    vector=[0.1, 0.2, 0.3, 0.4],
    group_by_field='document_id',  # group by the value of the document_id field
    group_count=2,  # return 2 groups
    group_topk=2,   # return at most 2 docs per group
)
# Check whether the request succeeded
if ret:
    print('query_group_by success')
    print(len(ret))
    print('------------------------')
    for group in ret:
        print('group key:', group.group_id)
        for doc in group.docs:
            prefix = ' -'
            print(prefix, doc)

Sample output:

query_group_by success
4
------------------------
group key: paper-01
 - {"id": "2", "fields": {"document_id": "paper-01", "chunk_id": 2, "content": "xxxB"}, "score": 0.6807}
 - {"id": "1", "fields": {"document_id": "paper-01", "chunk_id": 1, "content": "xxxA"}, "score": 0.4289}
group key: paper-02
 - {"id": "3", "fields": {"document_id": "paper-02", "chunk_id": 1, "content": "xxxC"}, "score": 0.6553}
 - {"id": "5", "fields": {"document_id": "paper-02", "chunk_id": 3, "content": "xxxE"}, "score": 0.4401}
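Conceptually, the behavior shown above amounts to bucketing scored docs by the group field, ranking the groups, and keeping the top `group_topk` docs per group. The following pure-Python sketch is illustrative only: the `group_top_k` helper and the ranking of groups by their best-scoring doc are assumptions, not the service's documented internals.

```python
from collections import defaultdict

def group_top_k(docs, group_by_field, group_count, group_topk):
    """Illustrative model of grouped similarity retrieval.

    docs: dicts of the form {"id": ..., "fields": {...}, "score": ...},
    already scored against the query vector.
    """
    # Bucket docs by the value of the group-by field
    groups = defaultdict(list)
    for doc in docs:
        groups[doc["fields"][group_by_field]].append(doc)
    # Rank groups by their best-scoring doc (assumed ordering)
    ranked = sorted(groups.items(),
                    key=lambda kv: max(d["score"] for d in kv[1]),
                    reverse=True)
    # Keep the top group_count groups, and the top group_topk docs in each
    return [
        (key, sorted(members, key=lambda d: d["score"], reverse=True)[:group_topk])
        for key, members in ranked[:group_count]
    ]
```

With the sample data above, this returns two groups (paper-01 first, since it holds the highest-scoring doc), each truncated to its two best docs.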

Grouped Similarity Search by the Vector of a Primary Key

Python example:

ret = collection.query_group_by(
    id='1',
    group_by_field='name',
)
# Check whether the query succeeded
if ret:
    print('query_group_by success')
    print(len(ret))
    for group in ret:
        print('group:', group.group_id)
        for doc in group.docs:
            print(doc)
            print(doc.id)
            print(doc.vector)
            print(doc.fields)

Grouped Similarity Search with a Filter Condition

Python example:

# Grouped similarity search by vector or primary key + filtering
ret = collection.query_group_by(
    vector=[0.1, 0.2, 0.3, 0.4],   # search by vector; a primary key can be used instead
    group_by_field='name',
    filter='age > 18',             # only docs with age > 18 are considered
    output_fields=['name', 'age'], # return only the name and age fields
    include_vector=True
)
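The filter string and `output_fields` are evaluated server-side before results are returned. To illustrate the semantics only, here is a client-side emulation; the `apply_filter_and_project` helper and the Python predicate are hypothetical stand-ins for the service's filter expression, not part of the SDK.

```python
def apply_filter_and_project(docs, predicate, output_fields):
    """Emulate filter + output_fields on the client for illustration.

    predicate: a Python callable standing in for a filter string like 'age > 18'.
    output_fields: field names to keep in each returned doc.
    """
    results = []
    for doc in docs:
        if predicate(doc["fields"]):  # keep only docs passing the filter
            projected = {k: doc["fields"][k]
                         for k in output_fields if k in doc["fields"]}
            results.append({"id": doc["id"], "fields": projected})
    return results
```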

Grouped Similarity Search with a Sparse Vector

Python example:

# Grouped similarity search by vector + sparse vector
ret = collection.query_group_by(
    vector=[0.1, 0.2, 0.3, 0.4],   # dense vector for search
    sparse_vector={1: 0.3, 20: 0.7},
    group_by_field='name',
)
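Conceptually, the sparse vector adds keyword-style term weights on top of the dense similarity. The sketch below shows one common way such scores are combined (a dot product over shared sparse indices added to the dense dot product); this combination is an assumption for illustration, not DashVector's documented scoring formula.

```python
def hybrid_score(dense_q, dense_d, sparse_q, sparse_d):
    """Illustrative dense + sparse score combination (assumed, not the
    service's exact formula).

    dense_q / dense_d: dense query and doc vectors (equal length).
    sparse_q / sparse_d: {index: weight} sparse query and doc vectors.
    """
    # Dense part: plain dot product of the two dense vectors
    dense = sum(q * d for q, d in zip(dense_q, dense_d))
    # Sparse part: dot product over indices both sparse vectors share
    sparse = sum(w * sparse_d[i] for i, w in sparse_q.items() if i in sparse_d)
    return dense + sparse
```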
