vllm.transformers_utils.processors.kimi_k25 ¶
KimiK25Processor ¶
Bases: ProcessorMixin
Source code in vllm/transformers_utils/processors/kimi_k25.py
__call__ ¶
__call__(
text: str | list[str] | None = None,
vision_chunks: list[VisionChunk] | None = None,
return_tensors: str | TensorType | None = None,
**kwargs,
) -> BatchFeature
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| text | str \| list[str] \| None | The text to be fed to the model. | None |
| vision_chunks | list[VisionChunk] \| None | List of VisionChunk objects to be processed alongside the text. | None |
Returns: [BatchFeature]: A [BatchFeature] with the following fields:
- **input_ids** -- List of token ids to be fed to a model.
- **pixel_values** -- Pixel values to be fed to a model.
Returned when `vision_chunks` is not `None`.
- **grid_thws** -- List of 3D image grids (T, H, W) for the LLM.
Returned when `vision_chunks` is not `None`.
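The conditional presence of these fields can be illustrated with a minimal sketch. This is not the real implementation (the actual tokenizer and image preprocessing live in `vllm/transformers_utils/processors/kimi_k25.py`); the tokenization and the `pixels`/`grid_thw` chunk attributes below are stubbed assumptions used only to show which keys appear in the returned mapping.

```python
# Hedged sketch of the __call__ output contract: which fields appear
# depends on whether `text` and `vision_chunks` were supplied.
# Tokenization and chunk layout are stand-ins, not the real processor logic.

def sketch_processor_call(text=None, vision_chunks=None):
    out = {}
    if text is not None:
        texts = [text] if isinstance(text, str) else text
        # Stub tokenization: one "token id" per character.
        out["input_ids"] = [[ord(c) for c in t] for t in texts]
    if vision_chunks is not None:
        # pixel_values and grid_thws are only returned when
        # vision_chunks is not None, mirroring the docs above.
        out["pixel_values"] = [chunk["pixels"] for chunk in vision_chunks]
        out["grid_thws"] = [chunk["grid_thw"] for chunk in vision_chunks]
    return out


# Text-only call: no vision fields in the output.
text_only = sketch_processor_call(text="hello")

# Text + vision call: all three fields present.
with_vision = sketch_processor_call(
    text="hello",
    vision_chunks=[{"pixels": [0.1, 0.2], "grid_thw": (1, 2, 2)}],
)
```

A text-only call returns just `input_ids`, while passing chunks adds `pixel_values` and `grid_thws`, matching the field descriptions above.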