retrieval systems

30 articles about retrieval systems in AI news

Differentiable Geometric Indexing: A Technical Breakthrough for Generative Retrieval Systems

New research introduces Differentiable Geometric Indexing (DGI), solving core optimization and geometric conflicts in generative retrieval. This enables end-to-end training that better surfaces long-tail items, validated on e-commerce datasets.

79% relevant

New Research Reveals Fundamental Limitations of Vector Embeddings for Retrieval

A new theoretical paper demonstrates that embedding-based retrieval systems have inherent limitations in representing complex relevance relationships, even with simple queries. This challenges the assumption that better training data alone can solve all retrieval problems.

97% relevant

Google's STATIC Framework Revolutionizes LLM Retrieval with 948x Speed Boost

Google AI's STATIC framework uses sparse matrix computation to accelerate constrained decoding in generative retrieval systems by up to 948x. This breakthrough enables LLMs to enforce business logic while maintaining real-time performance in recommendation systems.

75% relevant

Beyond RAG: How AI Memory Systems Are Creating Truly Adaptive Agents

AI development is shifting from static retrieval systems to dynamic memory architectures that enable continual learning. This evolution from RAG to agent memory represents a fundamental change in how AI systems accumulate and utilize knowledge over time.

85% relevant

New Diagnostic Tool Reveals Hidden Flaws in AI Ranking Systems

Researchers have developed a novel diagnostic method that isolates and analyzes LLM reranking behavior using fixed evidence pools. The study reveals surprising inconsistencies in how different AI models prioritize information, with implications for search engines and information retrieval systems.

72% relevant

From BM25 to Corrective RAG: A Benchmark Study Challenges the Dominance of Semantic Search for Tabular Data

A systematic benchmark of 10 RAG retrieval strategies on a financial QA dataset reveals that a two-stage hybrid + reranking pipeline performs best. Crucially, the classic BM25 algorithm outperformed modern dense retrieval models, challenging a core assumption in semantic search. The findings provide actionable, cost-aware guidance for building retrieval systems over heterogeneous documents.

82% relevant
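
The first stage of a hybrid pipeline like the one benchmarked above can be sketched with reciprocal rank fusion (RRF), one common way to merge a BM25 ranking with a dense-retrieval ranking before a reranking stage. The `rrf_fuse` helper, the document IDs, and the rankings below are illustrative assumptions, not taken from the study:

```python
# Sketch of a hybrid first stage: reciprocal rank fusion (RRF) merges a
# lexical (BM25) ranking with a dense-embedding ranking into one list.
# A second-stage reranker would then rescore the fused top-k.

def rrf_fuse(rankings, k=60):
    """Combine several ranked lists of doc IDs with reciprocal rank fusion."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_ranking = ["d3", "d1", "d7", "d2"]   # lexical retriever output
dense_ranking = ["d1", "d5", "d3", "d9"]  # embedding retriever output

fused = rrf_fuse([bm25_ranking, dense_ranking])
print(fused[:3])  # → ['d1', 'd3', 'd5']
```

Documents ranked highly by both retrievers (here `d1` and `d3`) rise to the top, which is the behavior the two-stage hybrid pipeline exploits before reranking.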

A Systematic Study of Pseudo-Relevance Feedback with LLMs: Key Design Choices for Search

New research systematically analyzes how to best use LLMs for pseudo-relevance feedback in search, finding that the method for using feedback is critical and that LLM-generated text can be a cost-effective feedback source. This provides clear guidance for improving retrieval systems.

84% relevant
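
The core loop of pseudo-relevance feedback can be sketched in a few lines: expand the query with the most frequent new terms from the top-ranked results of a first retrieval pass (or, per the paper's finding, from LLM-generated text used in their place). The `expand_query` helper and the toy documents below are illustrative assumptions, not the paper's method:

```python
# Toy pseudo-relevance feedback: the query is expanded with the most
# frequent terms from assumed-relevant top-ranked documents.

from collections import Counter

def expand_query(query_terms, feedback_docs, n_terms=2):
    """Add the n most common new terms from pseudo-relevant documents."""
    counts = Counter(
        term
        for doc in feedback_docs
        for term in doc.split()
        if term not in query_terms
    )
    return query_terms + [term for term, _ in counts.most_common(n_terms)]

query = ["vector", "search"]
top_docs = [
    "vector search with embeddings and indexing",
    "embeddings power semantic search at scale",
]
print(expand_query(query, top_docs))  # "embeddings" is the top expansion term
```

The paper's observation is that *how* this expanded query is then used matters as much as where the feedback text comes from.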

Memory Sparse Attention (MSA) Enables 100M Token Context Windows with Minimal Performance Loss

Memory Sparse Attention (MSA) is a proposed architecture that allows AI models to store and reason over massive long-term memory directly within their attention mechanism, eliminating the need for external retrieval systems. The approach reportedly enables context windows of up to 100 million tokens with minimal performance degradation.

85% relevant

Beyond Simple Retrieval: The Rise of Agentic RAG Systems That Think for Themselves

Traditional RAG systems are evolving into 'agentic' architectures where AI agents actively control the retrieval process. A new 5-layer evaluation framework helps developers measure when these intelligent pipelines make better decisions than static systems.

81% relevant

GRank: A New Target-Aware, Index-Free Retrieval Paradigm for Billion-Scale Recommender Systems

A new paper introduces GRank, a structured-index-free retrieval framework that unifies target-aware candidate generation with fine-grained ranking. It significantly outperforms tree- and graph-based methods on recall and latency, and is already deployed at massive scale.

83% relevant

The Multimodal Retrieval Gap: New Benchmark Exposes Critical Weakness in AI Systems

Researchers introduce MultiHaystack, a benchmark revealing that multimodal AI models struggle significantly when required to retrieve evidence from large, mixed-media collections before reasoning. While models perform well when given correct evidence, their accuracy plummets when they must first locate it across 46,000+ documents, images, and videos.

80% relevant

8 RAG Architectures Explained for AI Engineers: From Naive to Agentic Retrieval

A technical thread explains eight distinct RAG architectures with specific use cases, from basic vector similarity to complex agentic systems. This provides a practical framework for engineers choosing the right approach for different retrieval tasks.

85% relevant

Nemotron ColEmbed V2: NVIDIA's New SOTA Embedding Models for Visual Document Retrieval

NVIDIA researchers have released Nemotron ColEmbed V2, a family of three models (3B, 4B, 8B parameters) that set new state-of-the-art performance on the ViDoRe benchmark for visual document retrieval. The models use a 'late interaction' mechanism and are built on top of pre-trained VLMs like Qwen3-VL and NVIDIA's own Eagle 2. This matters because it directly addresses the challenge of retrieving information from visually rich documents like PDFs and slides within RAG systems.

74% relevant

Late Interaction Retrieval Models Show Length Bias, MaxSim Operator Efficiency Confirmed in New Study

New arXiv research analyzes two aspects of Late Interaction retrieval models: a length bias in document scoring and the efficiency of the MaxSim operator. The findings substantiate prior theoretical concerns about length bias and confirm MaxSim's effectiveness as a pooling method, with implications for high-precision search systems.

72% relevant
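
The MaxSim operator studied above scores a document by matching each query token embedding against its best document token embedding and summing the maxima (the late-interaction scheme popularized by ColBERT). A minimal sketch, with toy 2-d embeddings standing in for real model outputs:

```python
# Minimal sketch of late-interaction MaxSim scoring: for each query token,
# take the max similarity to any document token, then sum over query tokens.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def maxsim_score(query_embs, doc_embs):
    """Sum over query tokens of the max similarity to any document token."""
    return sum(max(dot(q, d) for d in doc_embs) for q in query_embs)

query = [(1.0, 0.0), (0.0, 1.0)]    # two query token embeddings
doc_a = [(0.75, 0.25), (0.25, 0.75)]  # tokens aligned with the query
doc_b = [(0.5, 0.5), (0.5, 0.5)]      # diffuse, unaligned tokens

print(maxsim_score(query, doc_a))  # → 1.5
print(maxsim_score(query, doc_b))  # → 1.0
```

The length-bias concern follows directly from this formula: longer documents offer more candidate tokens for each per-query-token max, which can inflate scores.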

VMLOps Publishes Comprehensive RAG Techniques Catalog: 34 Methods for Retrieval-Augmented Generation

VMLOps has released a structured catalog documenting 34 distinct techniques for improving Retrieval-Augmented Generation (RAG) systems. The resource provides practitioners with a systematic reference for optimizing retrieval, generation, and hybrid pipelines.

85% relevant

AI Efficiency Breakthrough: New Framework Optimizes Agentic RAG Systems Under Budget Constraints

Researchers have developed a systematic framework for optimizing agentic RAG systems under budget constraints. Their study reveals that hybrid retrieval strategies and limited search iterations deliver maximum accuracy with minimal costs, providing practical guidance for real-world AI deployment.

79% relevant

Democratizing AI: How Open-Source RAG Systems Are Revolutionizing Enterprise Incident Analysis

A new guide demonstrates how to build production-ready Retrieval-Augmented Generation systems using completely free, local tools. This approach enables organizations to analyze incidents and leverage historical data without costly API dependencies, making advanced AI accessible to all.

70% relevant

Goal-Aligned Recommendation Systems: Lessons from Return-Aligned Decision Transformer

The article discusses Return-Aligned Decision Transformer (RADT), a method that aligns recommender systems with long-term business returns. It addresses the common problem where models ignore target signals, offering a framework for transaction-driven recommendations.

78% relevant

FGR-ColBERT: A New Retrieval Model That Pinpoints Relevant Text Spans Efficiently

A new arXiv paper introduces FGR-ColBERT, a modified ColBERT retrieval model that integrates fine-grained relevance signals distilled from an LLM. It achieves high token-level accuracy while preserving retrieval efficiency, offering a practical alternative to post-retrieval LLM analysis.

72% relevant

Storing Less, Finding More: Novelty Filtering Architecture for Cross-Modal Retrieval on Edge Cameras

A new streaming retrieval architecture uses an on-device 'epsilon-net' filter to retain only semantically novel video frames, dramatically improving cross-modal search accuracy while reducing power consumption to 2.7 mW. This addresses the fundamental problem of redundant frames crowding out correct results in continuous video streams.

82% relevant
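
The greedy core of an epsilon-net filter like the one described above can be sketched simply: a streamed frame embedding is retained only if it lies farther than some threshold from every embedding kept so far, so near-duplicate frames never enter the index. The embeddings and threshold below are illustrative, not from the paper:

```python
# Sketch of a greedy epsilon-net novelty filter for streamed embeddings:
# keep a frame only if it is at least eps away from all retained frames.

import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def epsilon_net_filter(stream, eps):
    """Greedily retain embeddings at distance >= eps from all kept ones."""
    kept = []
    for emb in stream:
        if all(euclidean(emb, k) >= eps for k in kept):
            kept.append(emb)
    return kept

# Five frames: two pairs of near-duplicates plus one distinct scene.
frames = [(0.0, 0.0), (0.05, 0.0), (1.0, 0.0), (1.02, 0.01), (0.0, 1.0)]
kept = epsilon_net_filter(frames, eps=0.5)
print(len(kept))  # → 3: one representative per distinct scene survives
```

Dropping the redundant frames is what keeps duplicates from crowding out correct results at query time, while also shrinking the on-device index.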

Meta's QTT Method Fixes Long-Context LLM 'Buried Facts' Problem, Boosts Retrieval Accuracy

Meta researchers identified a failure mode where LLMs with 128K+ context windows miss information buried in the middle of documents. Their Query-only Test-Time Training (QTT) method adapts models at inference, significantly improving retrieval accuracy.

85% relevant

Insider Knowledge: How Much Can RAG Systems Gain from Evaluation Secrets?

New research warns that RAG systems can be gamed to achieve near-perfect evaluation scores if they have access to the evaluation criteria, creating a risk of mistaking metric overfitting for genuine progress. This highlights a critical vulnerability in the dominant LLM-judge evaluation paradigm.

78% relevant

New Research Proposes FilterRAG and ML-FilterRAG to Defend Against Knowledge Poisoning Attacks in RAG Systems

Researchers propose two novel defense methods, FilterRAG and ML-FilterRAG, to mitigate 'PoisonedRAG' attacks where adversaries inject malicious texts into a knowledge source to manipulate an LLM's output. The defenses identify and filter adversarial content, maintaining performance close to clean RAG systems.

92% relevant

New Benchmark and Methods Target Few-Shot Text-to-Image Retrieval for Complex Queries

Researchers introduce FSIR-BD, a benchmark for few-shot text-to-image retrieval, and two optimization methods to improve performance on compositional and out-of-distribution queries. This addresses a key weakness in pre-trained vision-language models.

86% relevant

Sparton: A New GPU Kernel Dramatically Speeds Up Learned Sparse Retrieval

Researchers propose Sparton, a fused Triton GPU kernel for Learned Sparse Retrieval models such as SPLADE. It avoids materializing a massive vocabulary-sized matrix, achieving up to 4.8x speedups and 26x larger batch sizes. This is a core infrastructure breakthrough for efficient AI-powered search.

72% relevant
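
The operation that kernels like Sparton accelerate on GPU can be illustrated in plain Python: in learned sparse retrieval, queries and documents are sparse term-weight maps, and a score is a dot product over shared terms only, so the full vocabulary-sized matrix never needs to exist. The term weights below are made up for illustration:

```python
# Plain-Python illustration of learned-sparse-retrieval scoring: a dot
# product over only the terms present in both sparse vectors, never
# touching the (mostly zero) full-vocabulary representation.

def sparse_score(query, doc):
    """Dot product over the terms present in both sparse vectors."""
    return sum(w * doc[t] for t, w in query.items() if t in doc)

query = {"neural": 1.2, "search": 0.8}
docs = {
    "d1": {"neural": 0.9, "network": 1.1},
    "d2": {"search": 1.5, "engine": 0.7, "neural": 0.4},
}

scores = {doc_id: sparse_score(query, d) for doc_id, d in docs.items()}
print(scores)  # d2 outscores d1: it overlaps the query on both terms
```

On GPU, the engineering challenge is doing this term-overlap computation for large batches without expanding each vector to vocabulary size, which is the matrix materialization Sparton avoids.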

ColBERT-Att: New Research Enhances Neural Retrieval by Integrating Attention into Late Interaction

Researchers propose ColBERT-Att, a novel neural information retrieval model that integrates attention weights into the late-interaction framework. The method shows improved recall accuracy on standard benchmarks like MS-MARCO, BEIR, and LoTTE.

86% relevant

Mistral Forge Targets RAG, Sparking Debate on Custom Models vs. Retrieval

Mistral AI's new 'Forge' platform reportedly focuses on custom model creation, challenging the prevailing RAG paradigm. This reignites the strategic debate between fine-tuning and retrieval-augmented generation for enterprise AI.

100% relevant

flexvec: A New SQL Kernel for Programmable Vector Retrieval

A new research paper introduces flexvec, a retrieval kernel that exposes the embedding matrix and score array as a programmable surface via SQL, enabling complex, real-time query-time operations called Programmatic Embedding Modulation (PEM). This approach allows AI agents to dynamically manipulate retrieval logic and achieves sub-100ms performance on million-scale corpora on a CPU.

76% relevant

ReBOL: A New AI Retrieval Method Combines Bayesian Optimization with LLMs to Improve Search

Researchers propose ReBOL, a retrieval method using Bayesian Optimization and LLM relevance scoring. It outperforms standard LLM rerankers on recall, achieving 46.5% vs. 35.0% recall@100 on one dataset, with comparable latency. This is a technical advance in information retrieval.

76% relevant

Controllable Evidence Selection in Retrieval-Augmented Question Answering via Deterministic Utility Gating

A new arXiv paper introduces a deterministic framework for selecting evidence in QA systems. It uses fixed scoring rules (MUE & DUE) to filter retrieved text, ensuring only independently sufficient facts are used. This creates auditable, compact evidence sets without model training.

70% relevant