# redis-vl-dotnet

## Hugging Face Vectorizer

`RedisVL.Vectorizers.HuggingFace` provides `HuggingFaceTextVectorizer`, a batch-capable `IBatchTextVectorizer` backed by the Hugging Face `hf-inference` feature-extraction API.

### Package contents

- `HuggingFaceTextVectorizer` for single-input and batch embedding requests
- `HuggingFaceVectorizerOptions` for request shaping
- `HuggingFaceTruncationDirection` for truncation-direction selection
- `HuggingFaceVectorizerPackage` as the package marker type
### Request options

`HuggingFaceVectorizerOptions` currently supports:

- `Normalize` to request normalized embeddings from the provider
- `PromptName` to select a named prompt when the model supports it
- `Truncate` and `TruncationDirection` to control overflow handling
- `EndpointOverride` when the application needs a non-default inference endpoint

The default provider endpoint is `https://router.huggingface.co/hf-inference/models/<model>`.
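A minimal sketch of setting these options. Only the option property names above are documented; the object-initializer shape, the `Right` enum member, and the vectorizer constructor parameters (model id, token, options) are assumptions about the API:

```csharp
using System;
using RedisVL.Vectorizers.HuggingFace;

// Assumed construction shape; check the package API for exact signatures.
var options = new HuggingFaceVectorizerOptions
{
    Normalize = true,                // ask the provider for normalized embeddings
    PromptName = "query",            // named prompt, if the model defines one
    Truncate = true,                 // handle overflow instead of erroring
    TruncationDirection = HuggingFaceTruncationDirection.Right,
    // EndpointOverride = ...,      // set only for a non-default inference endpoint
};

var vectorizer = new HuggingFaceTextVectorizer(
    "sentence-transformers/all-MiniLM-L6-v2",        // illustrative model id
    Environment.GetEnvironmentVariable("HF_TOKEN"),  // provider credential
    options);
```

Leaving `EndpointOverride` unset keeps requests on the default provider endpoint shown above.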
### Required environment variables

The runnable sample reads:

- `HF_TOKEN` as the required provider credential
- `HF_EMBEDDING_MODEL` as an optional model override
- `REDIS_VL_REDIS_URL` as the Redis connection-string override

If `HF_TOKEN` is missing, `/examples/HuggingFaceVectorizerExample` throws an `InvalidOperationException` before it calls the Hugging Face API.
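The fail-fast guard might look like the following sketch. `RequireHfToken` is an illustrative helper, not part of the package, and the fallback Redis URL is an assumption:

```csharp
using System;

// Fail fast when the required credential is absent, before any provider call.
static string RequireHfToken()
{
    var token = Environment.GetEnvironmentVariable("HF_TOKEN");
    if (string.IsNullOrWhiteSpace(token))
        throw new InvalidOperationException(
            "HF_TOKEN must be set before calling the Hugging Face API.");
    return token;
}

// Optional overrides fall back to defaults when unset.
var redisUrl = Environment.GetEnvironmentVariable("REDIS_VL_REDIS_URL")
               ?? "redis://localhost:6379"; // assumed local default
var token = RequireHfToken();
Console.WriteLine($"Connecting to {redisUrl} with a {token.Length}-char token.");
```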
### Example workflow

`/examples/HuggingFaceVectorizerExample` uses Hugging Face embeddings with `SemanticCache`:

- create the vectorizer with environment-based credentials
- probe one embedding first so the cache schema can use the live provider dimension
- batch-generate seed embeddings for stored prompts
- vectorize a new prompt and check for a semantic cache hit
- drop the example index and documents during cleanup
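The steps above can be sketched as follows. Only the type names `HuggingFaceTextVectorizer` and `SemanticCache` come from this document; every method name (`VectorizeAsync`, `VectorizeManyAsync`, `StoreAsync`, `CheckAsync`, `DeleteAsync`), constructor signature, and namespace below is an assumption about the API shape:

```csharp
using System;
using RedisVL.Vectorizers.HuggingFace;
// SemanticCache namespace assumed; adjust to the actual package layout.

// Create the vectorizer with environment-based credentials.
var vectorizer = new HuggingFaceTextVectorizer(
    "sentence-transformers/all-MiniLM-L6-v2",        // illustrative model id
    Environment.GetEnvironmentVariable("HF_TOKEN"));

// Probe one embedding so the cache schema uses the live provider dimension.
float[] probe = await vectorizer.VectorizeAsync("dimension probe");
var cache = new SemanticCache("hf_example", dimensions: probe.Length);

// Batch-generate seed embeddings for the stored prompts.
string[] prompts = { "What is Redis?", "How do vector indexes work?" };
var seeds = await vectorizer.VectorizeManyAsync(prompts);
for (var i = 0; i < prompts.Length; i++)
    await cache.StoreAsync(prompts[i], "seed answer", seeds[i]);

// Vectorize a new prompt and check for a semantic cache hit.
var hit = await cache.CheckAsync(await vectorizer.VectorizeAsync("Explain Redis"));
Console.WriteLine(hit is null ? "cache miss" : $"cache hit: {hit}");

// Drop the example index and documents during cleanup.
await cache.DeleteAsync();
```

Probing before creating the cache avoids hard-coding the embedding dimension, so swapping models via `HF_EMBEDDING_MODEL` does not require a schema change.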
Run it from the repository root:

```bash
dotnet run --project examples/HuggingFaceVectorizerExample/HuggingFaceVectorizerExample.csproj
```