Join this workshop to learn how to connect text to image search using OpenAI's CLIP model and the OpenSearch® search engine. We'll use a multi-modal vision and language model named CLIP. What makes this model special is that it works with images and text interchangeably, producing embeddings for input that is either a text snippet or an image. We'll guide you step by step through building a system that finds relevant photos using Python, OpenSearch, and an AI model.
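To give a flavour of what we'll build, here is a minimal sketch of the core idea: CLIP maps images and text into the same embedding space, so a text query can be matched against stored image embeddings with a k-NN search. The model checkpoint, index name, field names, and connection details below are illustrative assumptions, not the workshop's exact code, and the sketch assumes a "photos" index already created with a knn_vector mapping.

```python
# Sketch only: embed images and text with CLIP, then search OpenSearch by
# vector similarity. Names and connection details are placeholder assumptions.
from PIL import Image
from sentence_transformers import SentenceTransformer
from opensearchpy import OpenSearch

# "clip-ViT-B-32" is one publicly available CLIP checkpoint; the workshop
# may use a different model.
model = SentenceTransformer("clip-ViT-B-32")

# Hypothetical local cluster; point this at your own OpenSearch service.
client = OpenSearch(hosts=["https://localhost:9200"])

# Embed an image and index it alongside its path.
image_embedding = model.encode(Image.open("photo.jpg"))
client.index(
    index="photos",
    body={"image_path": "photo.jpg", "embedding": image_embedding.tolist()},
)

# Embed a text query with the same model and run a k-NN search
# against the image embeddings.
query_embedding = model.encode("a dog playing in the snow")
results = client.search(
    index="photos",
    body={
        "size": 3,
        "query": {"knn": {"embedding": {"vector": query_embedding.tolist(), "k": 3}}},
    },
)
for hit in results["hits"]["hits"]:
    print(hit["_source"]["image_path"], hit["_score"])
```

Because both the photos and the query pass through the same model, no captions or manual tags are needed: the similarity between the query vector and each image vector does the matching. The workshop walks through each of these steps in detail.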
Related resource in our developer center: Image recognition with Python, OpenCV, OpenAI CLIP and pgvector
You'll also need an OpenSearch service. We'll lead you through setting that up in the workshop if you don't already have one.