<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Essential Examples on Qdrant - Vector Search Engine</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/</link><description>Recent content in Essential Examples on Qdrant - Vector Search Engine</description><generator>Hugo</generator><language>en-us</language><managingEditor>info@qdrant.tech (Andrey Vasnetsov)</managingEditor><webMaster>info@qdrant.tech (Andrey Vasnetsov)</webMaster><atom:link href="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/index.xml" rel="self" type="application/rss+xml"/><item><title>Agentic RAG with CrewAI</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/agentic-rag-crewai-zoom/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/agentic-rag-crewai-zoom/</guid><description>&lt;!-- ![agentic-rag-crewai-zoom](/documentation/examples/agentic-rag-crewai-zoom/agentic-rag-1.png) --&gt;
&lt;h1 id="qdrant-agentic-rag-system-with-crewai"&gt;Qdrant Agentic RAG System with CrewAI&lt;/h1&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Time: 45 min&lt;/th&gt;
 &lt;th&gt;Level: Beginner&lt;/th&gt;
 &lt;th&gt;Output: &lt;a href="https://github.com/qdrant/examples/tree/master/agentic_rag_zoom_crewai" target="_blank" rel="noopener nofollow"&gt;GitHub&lt;/a&gt;&lt;/th&gt;
 &lt;th&gt;&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;By combining the power of Qdrant for vector search and CrewAI for orchestrating modular agents, you can build systems that don&amp;rsquo;t just answer questions but analyze, interpret, and act.&lt;/p&gt;
&lt;p&gt;Traditional RAG systems focus on fetching data and generating responses, but they lack the ability to reason deeply or handle multi-step processes.&lt;/p&gt;
&lt;p&gt;In this tutorial, we&amp;rsquo;ll walk you through building an Agentic RAG system step by step. By the end, you&amp;rsquo;ll have a working framework for storing data in a Qdrant Vector Database and extracting insights using CrewAI agents in conjunction with Vector Search over your data.&lt;/p&gt;</description></item><item><title>S3 Ingestion with LangChain</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/data-ingestion-beginners/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/data-ingestion-beginners/</guid><description>&lt;!-- ![data-ingestion-beginners-7](/documentation/examples/data-ingestion-beginners/data-ingestion-7.png) --&gt;
&lt;h1 id="s3-ingestion-with-langchain-and-qdrant"&gt;S3 Ingestion with LangChain and Qdrant&lt;/h1&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Time: 30 min&lt;/th&gt;
 &lt;th&gt;Level: Beginner&lt;/th&gt;
 &lt;th&gt;&lt;/th&gt;
 &lt;th&gt;&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;Data ingestion into a vector store&lt;/strong&gt; is essential for building effective search and retrieval algorithms, especially since nearly 80% of data is unstructured, lacking any predefined format.&lt;/p&gt;
&lt;p&gt;In this tutorial, we’ll create a streamlined data ingestion pipeline, pulling data directly from &lt;strong&gt;AWS S3&lt;/strong&gt; and feeding it into Qdrant. We’ll dive into vector embeddings, transforming unstructured data into a format that allows you to search documents semantically. Prepare to discover new ways to uncover insights hidden within unstructured data!&lt;/p&gt;</description></item><item><title>Agentic RAG with LangGraph</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/agentic-rag-langgraph/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/agentic-rag-langgraph/</guid><description>&lt;h1 id="agentic-rag-with-langgraph-and-qdrant"&gt;Agentic RAG with LangGraph and Qdrant&lt;/h1&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Time: 45 min&lt;/th&gt;
 &lt;th&gt;Level: Intermediate&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Traditional Retrieval-Augmented Generation (RAG) systems follow a straightforward path: query → retrieve → generate. Sure, this works well for many scenarios. But let’s face it—this linear approach often struggles when you&amp;rsquo;re dealing with complex queries that demand multiple steps or pulling together diverse types of information.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://qdrant.tech/articles/agentic-rag/" target="_blank" rel="noopener nofollow"&gt;Agentic RAG&lt;/a&gt; takes things up a notch by introducing AI agents that can orchestrate multiple retrieval steps and smartly decide how to gather and use the information you need. Think of it this way: in an Agentic RAG workflow, RAG becomes just one powerful tool in a much bigger and more versatile toolkit.&lt;/p&gt;</description></item><item><title>Discord RAG Bot</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/agentic-rag-camelai-discord/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/agentic-rag-camelai-discord/</guid><description>&lt;!-- ![agentic-rag-camelai-astronaut](/documentation/examples/agentic-rag-camelai-discord/astronaut-main.png) --&gt;
&lt;h1 id="qdrant-agentic-rag-discord-bot-with-camel-ai-and-openai"&gt;Qdrant Agentic RAG Discord Bot with CAMEL-AI and OpenAI&lt;/h1&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Time: 45 min&lt;/th&gt;
 &lt;th&gt;Level: Intermediate&lt;/th&gt;
 &lt;th&gt;&lt;a href="https://colab.research.google.com/drive/1Ymqzm6ySoyVOekY7fteQBCFCXYiYyHxw#scrollTo=QQZXwzqmNfaS" target="_blank" rel="noopener nofollow"&gt;&lt;img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"&gt;&lt;/a&gt;&lt;/th&gt;
 &lt;th&gt;&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Unlike traditional RAG techniques, which passively retrieve context and generate responses, &lt;strong&gt;agentic RAG&lt;/strong&gt; involves active decision-making and multi-step reasoning by the chatbot. Instead of just fetching data, the chatbot makes decisions, dynamically interacts with various data sources, and adapts based on context, giving it a much more dynamic and intelligent approach.&lt;/p&gt;</description></item><item><title>Multimodal and Multilingual RAG</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/multimodal-search/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/multimodal-search/</guid><description>&lt;h1 id="multimodal-and-multilingual-rag-with-llamaindex-and-qdrant"&gt;Multimodal and Multilingual RAG with LlamaIndex and Qdrant&lt;/h1&gt;
&lt;!-- ![Snow prints](/documentation/examples/multimodal-search/image-1.png) --&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Time: 15 min&lt;/th&gt;
 &lt;th&gt;Level: Beginner&lt;/th&gt;
 &lt;th&gt;Output: &lt;a href="https://github.com/qdrant/examples/blob/master/multimodal-search/Multimodal_Search_with_LlamaIndex.ipynb" target="_blank" rel="noopener nofollow"&gt;GitHub&lt;/a&gt;&lt;/th&gt;
 &lt;th&gt;&lt;a href="https://githubtocolab.com/qdrant/examples/blob/master/multimodal-search/Multimodal_Search_with_LlamaIndex.ipynb" target="_blank" rel="noopener nofollow"&gt;&lt;img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"&gt;&lt;/a&gt;&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;We often understand and share information more effectively when combining different types of data. For example, the taste of comfort food can trigger childhood memories. We might describe a song with just “pam pam clap” sounds instead of writing paragraphs. Sometimes, we may use emojis and stickers to express how we feel or to share complex ideas.&lt;/p&gt;</description></item><item><title>5-Minute RAG with DeepSeek</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/rag-deepseek/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/rag-deepseek/</guid><description>&lt;!-- ![deepseek-rag-qdrant](/documentation/examples/rag-deepseek/deepseek.png) --&gt;
&lt;h1 id="rag-in-5-minutes-with-deepseek-and-qdrant"&gt;RAG in 5 Minutes with DeepSeek and Qdrant&lt;/h1&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Time: 5 min&lt;/th&gt;
 &lt;th&gt;Level: Beginner&lt;/th&gt;
 &lt;th&gt;Output: &lt;a href="https://github.com/qdrant/examples/blob/master/rag-with-qdrant-deepseek/deepseek-qdrant.ipynb" target="_blank" rel="noopener nofollow"&gt;GitHub&lt;/a&gt;&lt;/th&gt;
 &lt;th&gt;&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;This tutorial demonstrates how to build a &lt;strong&gt;Retrieval-Augmented Generation (RAG)&lt;/strong&gt; pipeline using Qdrant as a vector storage solution and DeepSeek for semantic query enrichment. RAG pipelines enhance Large Language Model (LLM) responses by providing contextually relevant data.&lt;/p&gt;
&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;In this tutorial, we will:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Take sample text and turn it into vectors with FastEmbed.&lt;/li&gt;
&lt;li&gt;Send the vectors to a Qdrant collection.&lt;/li&gt;
&lt;li&gt;Connect Qdrant and DeepSeek into a minimal RAG pipeline.&lt;/li&gt;
&lt;li&gt;Ask DeepSeek different questions and test answer accuracy.&lt;/li&gt;
&lt;li&gt;Enrich DeepSeek prompts with content retrieved from Qdrant.&lt;/li&gt;
&lt;li&gt;Evaluate answer accuracy before and after.&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id="architecture"&gt;Architecture:&lt;/h4&gt;
&lt;p&gt;&lt;img src="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/examples/rag-deepseek/architecture.png" alt="deepseek-rag-architecture"&gt;&lt;/p&gt;</description></item><item><title>n8n Workflow Automation</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/qdrant-n8n/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/qdrant-n8n/</guid><description>&lt;!-- ![n8n-qdrant](/documentation/examples/qdrant-n8n-2/cover.png) --&gt;
&lt;h1 id="automate-qdrant-workflows-with-n8n"&gt;Automate Qdrant Workflows with n8n&lt;/h1&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Time: 45 min&lt;/th&gt;
 &lt;th&gt;Level: Intermediate&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;This tutorial shows how to combine Qdrant with the &lt;a href="https://n8n.io/" target="_blank" rel="noopener nofollow"&gt;n8n&lt;/a&gt; low-code automation platform to cover &lt;strong&gt;use cases beyond basic Retrieval-Augmented Generation (RAG)&lt;/strong&gt;. You&amp;rsquo;ll learn how to use vector search for &lt;strong&gt;recommendations&lt;/strong&gt; and &lt;strong&gt;unstructured big data analysis&lt;/strong&gt;.&lt;/p&gt;
&lt;aside role="status"&gt;
 Since this tutorial was created, &lt;a href="https://qdrant.tech/documentation/platforms/n8n/"&gt;an official Qdrant node for n8n&lt;/a&gt; has been released. It simplifies workflows and replaces the HTTP request nodes used in the examples below. Watch &lt;a href="https://youtu.be/sYP_kHWptHY"&gt;a quick video introduction&lt;/a&gt; to it.
&lt;/aside&gt;
&lt;h2 id="setting-up-qdrant-in-n8n"&gt;Setting Up Qdrant in n8n&lt;/h2&gt;
&lt;p&gt;To start using Qdrant with n8n, you need to provide your Qdrant instance credentials in the &lt;a href="https://docs.n8n.io/integrations/builtin/credentials/qdrant/#using-api-key" target="_blank" rel="noopener nofollow"&gt;credentials&lt;/a&gt; tab. Select &lt;code&gt;QdrantApi&lt;/code&gt; from the list.&lt;/p&gt;</description></item><item><title>Video Anomaly Detection Part 1: Architecture, Twelve Labs, and NVIDIA VSS</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-1/</guid><description>&lt;h1 id="video-anomaly-detection-architecture-twelve-labs-and-nvidia-vss"&gt;Video Anomaly Detection: Architecture, Twelve Labs, and NVIDIA VSS&lt;/h1&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Time: 90 min&lt;/th&gt;
 &lt;th&gt;Level: Advanced&lt;/th&gt;
 &lt;th&gt;Output: &lt;a href="https://github.com/qdrant/video-anomaly-edge" target="_blank" rel="noopener nofollow"&gt;GitHub&lt;/a&gt;&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;em&gt;This is Part 1 of a 3-part series on building real-time video anomaly detection from edge to cloud. We&amp;rsquo;ll go from architecture and integrations to a production-grade detection pipeline.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Series:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Part 1 | Architecture, Twelve Labs, and NVIDIA VSS (here)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-2/"&gt;Part 2 | Edge-to-Cloud Pipeline&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-3/"&gt;Part 3 | Scoring, Governance, and Deployment&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;p&gt;In this tutorial, you will learn how to build a real-time video anomaly detection system that monitors live surveillance cameras across multiple sites, automatically detecting unusual events without training on specific anomaly types. You&amp;rsquo;ll see how Qdrant Edge integrates with Twelve Labs and NVIDIA Metropolis VSS to create a production-grade edge-to-cloud detection pipeline deployed on Vultr Cloud GPUs.&lt;/p&gt;</description></item><item><title>Video Anomaly Detection Part 2: Edge-to-Cloud Pipeline</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-2/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-2/</guid><description>&lt;h1 id="video-anomaly-detection-edge-to-cloud-pipeline"&gt;Video Anomaly Detection: Edge-to-Cloud Pipeline&lt;/h1&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Time: 90 min&lt;/th&gt;
 &lt;th&gt;Level: Advanced&lt;/th&gt;
 &lt;th&gt;Output: &lt;a href="https://github.com/qdrant/video-anomaly-edge" target="_blank" rel="noopener nofollow"&gt;GitHub&lt;/a&gt;&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;em&gt;This is Part 2 of a 3-part series on building real-time video anomaly detection from edge to cloud.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Series:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-1/"&gt;Part 1 | Architecture, Twelve Labs, and NVIDIA VSS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 2 | Edge-to-Cloud Pipeline (here)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-3/"&gt;Part 3 | Scoring, Governance, and Deployment&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;p&gt;In &lt;a href="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-1/"&gt;Part 1&lt;/a&gt;, we set up the project, covered why kNN anomaly detection in Qdrant outperforms classifiers, integrated Twelve Labs for video embeddings and Q&amp;amp;A, and connected NVIDIA VSS. Now we build the edge.&lt;/p&gt;</description></item><item><title>Video Anomaly Detection Part 3: Scoring, Governance, and Deployment</title><link>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-3/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-3/</guid><description>&lt;h1 id="video-anomaly-detection-scoring-governance-and-deployment"&gt;Video Anomaly Detection: Scoring, Governance, and Deployment&lt;/h1&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Time: 90 min&lt;/th&gt;
 &lt;th&gt;Level: Advanced&lt;/th&gt;
 &lt;th&gt;Output: &lt;a href="https://github.com/qdrant/video-anomaly-edge" target="_blank" rel="noopener nofollow"&gt;GitHub&lt;/a&gt;&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;em&gt;This is Part 3 of a 3-part series on building real-time video anomaly detection from edge to cloud.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Series:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-1/"&gt;Part 1 | Architecture, Twelve Labs, and NVIDIA VSS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-2/"&gt;Part 2 | Edge-to-Cloud Pipeline&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 3 | Scoring, Governance, and Deployment (here)&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;p&gt;In &lt;a href="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-1/"&gt;Part 1&lt;/a&gt;, we set up the architecture, Twelve Labs integration, and NVIDIA VSS connection. In &lt;a href="https://deploy-preview-2342--condescending-goldwasser-91acf0.netlify.app/documentation/tutorials-build-essentials/video-anomaly-edge-part-2/"&gt;Part 2&lt;/a&gt;, we built Qdrant Edge&amp;rsquo;s two-shard architecture and the escalation pipeline. Now we turn raw scores into incidents, protect the baseline, and deploy.&lt;/p&gt;</description></item></channel></rss>