<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Day 5: Advanced APIs on Qdrant - Vector Search Engine</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/course/essentials/day-5/</link><description>Recent content in Day 5: Advanced APIs on Qdrant - Vector Search Engine</description><generator>Hugo</generator><language>en-us</language><managingEditor>info@qdrant.tech (Andrey Vasnetsov)</managingEditor><webMaster>info@qdrant.tech (Andrey Vasnetsov)</webMaster><atom:link href="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/course/essentials/day-5/index.xml" rel="self" type="application/rss+xml"/><item><title>Multivectors for Late Interaction Models</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/course/essentials/day-5/colbert-multivectors/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/course/essentials/day-5/colbert-multivectors/</guid><description>&lt;div class="date"&gt;
 &lt;img class="date-icon" src="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/icons/outline/date-blue.svg" alt="Calendar" /&gt; Day 5 
&lt;/div&gt;

&lt;h1 id="multivectors-for-late-interaction-models"&gt;Multivectors for Late Interaction Models&lt;/h1&gt;
&lt;div class="video"&gt;
&lt;iframe 
 src="https://www.youtube.com/embed/8ptlXSsSEPk?si=TzsWlastazBQPWWb"
 frameborder="0"
 allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
 referrerpolicy="strict-origin-when-cross-origin"
 allowfullscreen&gt;
&lt;/iframe&gt;
&lt;/div&gt;
&lt;br/&gt;
&lt;p&gt;Many embedding models represent data as a single vector. Transformer-based encoders achieve this by pooling the per-token vector matrix from the final layer into a single vector. That works great for most cases. But when your documents get more complex, cover multiple topics, or require context sensitivity, that one-size-fits-all compression starts to break down. You lose granularity and semantic alignment (though chunking and learned pooling mitigate this to an extent).&lt;/p&gt;</description></item><item><title>The Universal Query API</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/course/essentials/day-5/universal-query-api/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/course/essentials/day-5/universal-query-api/</guid><description>&lt;div class="date"&gt;
 &lt;img class="date-icon" src="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/icons/outline/date-blue.svg" alt="Calendar" /&gt; Day 5 
&lt;/div&gt;

&lt;h1 id="the-universal-query-api"&gt;The Universal Query API&lt;/h1&gt;
&lt;p&gt;Picture this: a customer types &amp;ldquo;leather jackets&amp;rdquo; into your store&amp;rsquo;s search bar. You want to show items that match the style semantically - so a bomber jacket surfaces even if it doesn&amp;rsquo;t mention &amp;ldquo;leather jackets&amp;rdquo; verbatim - but you also need to enforce your business rules. Only products under $200, only items in stock, only jackets released within the past year. Traditionally, you&amp;rsquo;d fire off a search, gather results, then apply filters and glue code. With Qdrant&amp;rsquo;s &lt;a href="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/search/hybrid-queries/"&gt;Universal Query API&lt;/a&gt;, all of that happens in one declarative request.&lt;/p&gt;</description></item><item><title>Demo: Universal Query for Hybrid Retrieval</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/course/essentials/day-5/universal-query-demo/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/course/essentials/day-5/universal-query-demo/</guid><description>&lt;div class="date"&gt;
 &lt;img class="date-icon" src="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/icons/outline/date-blue.svg" alt="Calendar" /&gt; Day 5 
&lt;/div&gt;

&lt;h1 id="demo-universal-query-for-hybrid-retrieval"&gt;Demo: Universal Query for Hybrid Retrieval&lt;/h1&gt;
&lt;p&gt;In this hands-on demo, we&amp;rsquo;ll build a research paper discovery system using the arXiv dataset that showcases the full power of Qdrant&amp;rsquo;s Universal Query API. You&amp;rsquo;ll see how to combine dense semantics, sparse keywords, and ColBERT reranking to help researchers find exactly the papers they need - all in a single query.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Follow along in Colab:&lt;/strong&gt; &lt;a href="https://colab.research.google.com/github/qdrant/examples/blob/master/course/day_5/universal-query-demo.ipynb"&gt;
&lt;img src="https://colab.research.google.com/assets/colab-badge.svg" style="display:inline; margin:0;" alt="Open In Colab"/&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="the-challenge-intelligent-research-discovery"&gt;The Challenge: Intelligent Research Discovery&lt;/h2&gt;
&lt;p&gt;Imagine you&amp;rsquo;re a machine learning researcher looking for &amp;ldquo;transformer architectures for multimodal learning with attention mechanisms.&amp;rdquo; You need to:&lt;/p&gt;</description></item><item><title>Project: Building a Recommendation System</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/course/essentials/day-5/pitstop-project/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/course/essentials/day-5/pitstop-project/</guid><description>&lt;div class="date"&gt;
 &lt;img class="date-icon" src="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/icons/outline/date-blue.svg" alt="Calendar" /&gt; Day 5 
&lt;/div&gt;

&lt;h1 id="project-building-a-recommendation-system"&gt;Project: Building a Recommendation System&lt;/h1&gt;
&lt;p&gt;Bring together dense vectors, sparse vectors, and ColBERT multivectors in one atomic Universal Query. You&amp;rsquo;ll retrieve candidates, fuse signals, rerank with ColBERT, and apply business filters - all in a single request.&lt;/p&gt;
&lt;h2 id="your-mission"&gt;Your Mission&lt;/h2&gt;
&lt;p&gt;Build a complete recommendation system using Qdrant&amp;rsquo;s Universal Query API with dense, sparse, and ColBERT multivectors in one request.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Estimated Time:&lt;/strong&gt; 90 minutes&lt;/p&gt;
&lt;h2 id="what-youll-build"&gt;What You&amp;rsquo;ll Build&lt;/h2&gt;
&lt;p&gt;A hybrid recommendation system using:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Multi-vector architecture&lt;/strong&gt; with dense, sparse, and ColBERT vectors&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Universal Query API&lt;/strong&gt; for atomic multi-stage search&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RRF fusion&lt;/strong&gt; for combining candidates&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ColBERT reranking&lt;/strong&gt; for fine-grained relevance scoring&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Business rule filtering&lt;/strong&gt; at multiple pipeline stages&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Production-ready patterns&lt;/strong&gt; for recommendation systems&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="setup"&gt;Setup&lt;/h2&gt;
&lt;h3 id="prerequisites"&gt;Prerequisites&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Qdrant Cloud cluster (URL + API key)&lt;/li&gt;
&lt;li&gt;Python 3.9+ (or Google Colab)&lt;/li&gt;
&lt;li&gt;Packages: &lt;code&gt;qdrant-client&lt;/code&gt;, &lt;code&gt;fastembed&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="models"&gt;Models&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Dense&lt;/strong&gt;: &lt;code&gt;sentence-transformers/all-MiniLM-L6-v2&lt;/code&gt; (384-dim)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Sparse&lt;/strong&gt;: &lt;code&gt;prithivida/Splade_PP_en_v1&lt;/code&gt; (SPLADE)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Multivector&lt;/strong&gt;: &lt;code&gt;colbert-ir/colbertv2.0&lt;/code&gt; (128-dim tokens)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="dataset"&gt;Dataset&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Scope&lt;/strong&gt;: A small set of sample items (e.g., 10-20 movies).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Payload Fields&lt;/strong&gt;: &lt;code&gt;title&lt;/code&gt;, &lt;code&gt;description&lt;/code&gt;, &lt;code&gt;category&lt;/code&gt;, &lt;code&gt;genre&lt;/code&gt;, &lt;code&gt;year&lt;/code&gt;, &lt;code&gt;rating&lt;/code&gt;, &lt;code&gt;user_segment&lt;/code&gt;, &lt;code&gt;popularity_score&lt;/code&gt;, &lt;code&gt;release_date&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Filters Used&lt;/strong&gt;: &lt;code&gt;category&lt;/code&gt;, &lt;code&gt;user_segment&lt;/code&gt;, &lt;code&gt;release_date&lt;/code&gt;, &lt;code&gt;popularity_score&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="build-steps"&gt;Build Steps&lt;/h2&gt;
&lt;h3 id="step-1-set-up-the-hybrid-collection"&gt;Step 1: Set Up the Hybrid Collection&lt;/h3&gt;
&lt;h4 id="initialize-client-and-collection"&gt;Initialize Client and Collection&lt;/h4&gt;
&lt;p&gt;First, connect to Qdrant and create a clean collection for our recommendation system:&lt;/p&gt;</description></item></channel></rss>