Temporal Modeling
30 articles about temporal modeling in AI news
New Research Proposes Stage-Wise Framework for Modeling Evolving User Interests in Recommendation Systems
An arXiv paper introduces a unified neural framework that models both long-term preferences and short-term, stage-wise interest evolution for time-sensitive recommendations. It outperforms baselines on real-world datasets by capturing temporal dynamics more effectively.
Annealed Co-Generation: A New AI Framework Tackles Scientific Complexity Through Pairwise Modeling
Researchers propose Annealed Co-Generation, a novel AI framework that simplifies multivariate generation in scientific applications by modeling variables in pairs rather than jointly. The approach reduces computational burden and data imbalance while maintaining coherence across complex systems.
GeoAI Framework Outperforms Benchmarks in Modeling Urban Traffic Flow
A new GeoAI hybrid framework combining MGWR, Random Forest, and ST-GCN models achieves 23-62% better accuracy in predicting multimodal urban traffic flows. The research highlights land use mix as the strongest predictor for vehicle traffic, with implications for urban planning and logistics.
Meta's V-JEPA 2.1 Achieves +20% Robotic Grasp Success with Dense Feature Learning from 1M+ Hours of Video
Meta researchers released V-JEPA 2.1, a video self-supervised learning model that learns dense spatial-temporal features from over 1 million hours of video. The approach improves robotic grasp success by ~20% over previous methods by forcing the model to understand precise object positions and movements.
FCUCR: A Federated Continual Framework for Learning Evolving User Preferences
Researchers propose FCUCR, a federated learning framework for recommendation systems that combats 'temporal forgetting' and enhances personalization without centralizing user data. This addresses a core challenge in building private, adaptive AI for customer-centric services.
Amazon's T-REX: A Transformer Architecture for Next-Basket Grocery Recommendations
Amazon researchers propose T-REX, a transformer-based model for grocery basket recommendations. It addresses unique challenges like repetitive purchases and sparse patterns through category-level modeling and causal masking, showing significant improvements in offline/online tests.
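Causal masking — hiding future baskets from the attention computation so the model can only condition on past purchases — is a standard transformer ingredient. A minimal NumPy sketch of the idea (illustrative only, not T-REX's actual implementation; the 4-position example is made up):

```python
import numpy as np

def causal_mask(seq_len: int) -> np.ndarray:
    """Lower-triangular mask: position i may attend only to positions <= i."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def masked_attention(scores: np.ndarray) -> np.ndarray:
    """Apply the causal mask before softmax so future baskets contribute zero weight."""
    mask = causal_mask(scores.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    exp = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

# Toy example: 4 basket positions with uniform raw scores.
# Row i spreads its attention evenly over positions 0..i only.
weights = masked_attention(np.zeros((4, 4)))
```

With uniform scores, the first position attends only to itself, while the last attends uniformly over all four — exactly the autoregressive structure next-basket prediction needs.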
EpisTwin: A Neuro-Symbolic Framework for Personal AI Using Knowledge Graphs
Researchers propose EpisTwin, a neuro-symbolic architecture that builds a Personal Knowledge Graph from fragmented user data to enable complex, verifiable reasoning. It addresses limitations of standard RAG by capturing semantic topology and temporal dependencies.
TimeGS: How Computer Graphics Techniques Are Revolutionizing Time Series Forecasting
Researchers have introduced TimeGS, a novel AI framework that treats time series forecasting as a 2D rendering problem. By adapting Gaussian splatting techniques from computer graphics, the approach achieves state-of-the-art performance while maintaining temporal continuity.
Brain-OF: The First Unified AI Model That Reads Multiple Brain Signals Simultaneously
Researchers have developed Brain-OF, the first omnifunctional foundation model that jointly processes fMRI, EEG, and MEG brain signals. This unified approach overcomes previous single-modality limitations by integrating complementary spatiotemporal data through innovative architecture and pretraining techniques.
Beyond CGI: How Physics-Consistent 4D AI Will Transform Luxury Product Visualization
Phys4D's physics-consistent 4D modeling pipeline solves the 'uncanny valley' of AI-generated product videos, producing hyper-realistic, physically plausible digital twins for luxury goods. The result is scalable, high-fidelity content creation for marketing, virtual try-on, and digital archives.
Google Open-Sources TimesFM: A 100B-Point Time Series Foundation Model for Zero-Shot Forecasting
Google has open-sourced TimesFM, a foundation model for time series forecasting trained on 100 billion real-world time points. It requires no dataset-specific training and can generate predictions instantly for domains like traffic, weather, and demand.
EVNextTrade: Learning-to-Rank Models for EV Charging Node Recommendation in Energy Trading
New research proposes EVNextTrade, a learning-to-rank framework for recommending optimal charging nodes for peer-to-peer EV energy trading. Using gradient-boosted models on urban mobility data, it addresses uncertainty in matching energy providers and consumers. LightGBM achieved near-perfect early-ranking performance (NDCG@1: 0.9795).
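NDCG@1, the metric quoted for LightGBM above, rewards placing the most relevant charging node at the top of the ranking. A minimal, dependency-free sketch of how NDCG@k is computed (graded relevances in the example are invented):

```python
import math

def ndcg_at_k(relevances, k: int) -> float:
    """NDCG@k: DCG of the predicted ranking divided by DCG of the ideal ranking.
    `relevances` lists graded relevance in the order the model ranked the items."""
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# If the top-ranked node is the truly best one, NDCG@1 = 1.0.
score = ndcg_at_k([3, 1, 2, 0], k=1)
```

A near-perfect NDCG@1 of 0.9795 thus means the ranker almost always places the best-matched node first.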
MMM4Rec: A New Multi-Modal Mamba Model for Faster, More Transferable Sequential Recommendations
Researchers propose MMM4Rec, a novel sequential recommendation framework using State Space Duality for efficient multi-modal learning. It claims 10x faster fine-tuning convergence and improved accuracy by dynamically prioritizing key visual/textual information across user interaction sequences.
Microsoft's VibeVoice Family Processes 60-Minute Audio in Single Pass, Eliminates Chunking for ASR & TTS
Microsoft open-sourced VibeVoice, a family of speech AI models that processes up to 60 minutes of audio without chunking. It delivers structured transcriptions with speaker diarization and generates 90-minute multi-speaker speech in one pass.
HyenaRec: A Polynomial-Based Architecture for Fast, Scalable Sequential Recommendation
Researchers propose HyenaRec, a novel sequential recommender using Legendre polynomial kernels and gated convolutions. It achieves better accuracy than attention-based models while training up to 6x faster, especially on long user histories. This addresses a critical efficiency bottleneck in next-item prediction.
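The general idea behind polynomial-parameterized convolutions is to describe a long filter with a handful of Legendre coefficients instead of one free parameter per tap. A hedged sketch of that principle (the coefficients and filter length are illustrative, not HyenaRec's actual design):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_kernel(length: int, coeffs) -> np.ndarray:
    """Long convolution filter parameterized by a few Legendre coefficients,
    evaluated on [-1, 1] — far fewer parameters than `length` free taps."""
    t = np.linspace(-1.0, 1.0, length)
    return legendre.legval(t, coeffs)

def causal_conv(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Causal convolution: output at step t depends only on x[:t+1]."""
    full = np.convolve(x, k)
    return full[: len(x)]

# A 64-tap filter described by just three (made-up) coefficients.
k = legendre_kernel(64, coeffs=[0.5, 0.3, 0.2])
y = causal_conv(np.ones(128), k)
```

Because the filter sweeps the whole history in one pass, cost scales near-linearly with sequence length — the efficiency edge over quadratic attention on long user histories.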
LSA: A New Transformer Model for Dynamic Aspect-Based Recommendation
Researchers propose LSA, a Long-Short-term Aspect Interest Transformer, to model the dynamic nature of user preferences in aspect-based recommender systems. It improves prediction accuracy by 2.55% on average by weighting aspects from both recent and long-term behavior.
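A common way to combine long- and short-term interest signals is to mix a lifetime average with a recency-decayed one. The sketch below is a generic illustration, not LSA's actual weighting; `decay` and `alpha` are invented knobs:

```python
import numpy as np

def blend_aspect_interest(history: np.ndarray, decay: float = 0.8,
                          alpha: float = 0.5) -> np.ndarray:
    """Blend long-term and short-term aspect interest.
    history: (T, A) matrix of per-interaction aspect scores, oldest row first."""
    T = history.shape[0]
    long_term = history.mean(axis=0)                # stable lifetime preference
    w = decay ** np.arange(T - 1, -1, -1)           # recent rows weigh more
    short_term = (w[:, None] * history).sum(axis=0) / w.sum()
    return alpha * long_term + (1 - alpha) * short_term

# Two interactions over two aspects: an old hit on aspect 0, a recent hit on aspect 1.
profile = blend_aspect_interest(np.array([[1.0, 0.0], [0.0, 1.0]]))
```

The blended profile leans toward the recently-touched aspect while retaining the long-term one, which is the behavior the paper's weighting aims for.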
Morgan Stanley Predicts 10x Compute Spike to Double AI Intelligence, Highlights 18 GW Energy Crisis
Morgan Stanley forecasts a massive AI leap from a 10x increase in training compute, but warns of an 18-gigawatt U.S. power shortfall by 2028. The report claims GPT-5.4 matches human experts with 83% on GDPVal.
New RL-Guided Planning Framework Boosts Warehouse Robot Throughput
Researchers propose RL-RH-PP, a hybrid AI framework combining reinforcement learning with classical search for lifelong multi-agent path finding. It dynamically assigns robot priorities to reduce congestion, achieving higher throughput in simulations and generalizing across layouts.
LLM-Based System Achieves 68% Recall at 90% Precision for Online User Deanonymization
Researchers demonstrate that large language models can effectively deanonymize online users by analyzing their writing style and content across platforms. Their system matches 68% of true user pairs with 90% precision, significantly outperforming traditional methods.
Revisiting the Netflix Prize: A Technical Walkthrough of the Classic Matrix Factorization Approach
A developer recreates the core algorithm from the famous 2009 Netflix Prize paper on collaborative filtering via matrix factorization. This is a foundational look at the recommendation engine tech that predates modern deep learning.
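The core of that approach — FunkSVD-style matrix factorization trained with SGD — fits a low-rank model where each rating is approximated by a dot product of user and item factor vectors. A self-contained sketch (hyperparameters and the toy data are illustrative):

```python
import numpy as np

def factorize(ratings, n_factors=8, lr=0.02, reg=0.02, epochs=500, seed=0):
    """FunkSVD-style matrix factorization via SGD: predict r_ui ≈ p_u · q_i
    by minimizing squared error with L2 regularization."""
    rng = np.random.default_rng(seed)
    n_users = max(u for u, _, _ in ratings) + 1
    n_items = max(i for _, i, _ in ratings) + 1
    P = rng.normal(0, 0.1, (n_users, n_factors))   # user factors
    Q = rng.normal(0, 0.1, (n_items, n_factors))   # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            pu = P[u].copy()                        # use pre-update value for Q's step
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

# Tiny (user, item, rating) example.
data = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0)]
P, Q = factorize(data)
```

Unobserved cells like `P[1] @ Q[1]` are then the model's predictions — the essence of collaborative filtering by matrix completion.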
TimeSqueeze: A New Method for Dynamic Patching in Time Series Forecasting
Researchers introduce TimeSqueeze, a dynamic patching mechanism for Transformer-based time series models. It adaptively segments sequences based on signal complexity, achieving up to 20x faster convergence and 8x higher data efficiency. This addresses a core trade-off between accuracy and computational cost in long-horizon forecasting.
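The intuition behind complexity-adaptive segmentation is easy to show: spend short patches where the signal is volatile and longer ones where it is smooth. This sketch uses local standard deviation as the complexity proxy; `base` and `threshold` are made-up knobs, not TimeSqueeze's actual parameters:

```python
import numpy as np

def adaptive_patches(x: np.ndarray, base: int = 16, threshold: float = 0.5):
    """Illustrative dynamic patching: volatile windows are split into
    two half-length patches; smooth windows stay as one long patch."""
    patches, start = [], 0
    while start < len(x):
        win = x[start:start + base]
        if np.std(win) > threshold and len(win) > 1:
            half = len(win) // 2
            patches.append(win[:half])
            patches.append(win[half:])
        else:
            patches.append(win)
        start += base
    return patches

# Flat first half, noisy second half → coarse then fine patches.
sig = np.concatenate([np.zeros(32), np.random.default_rng(0).normal(0, 2, 32)])
segs = adaptive_patches(sig)
```

Fewer tokens on smooth stretches is exactly where the data-efficiency gain comes from: the Transformer's quadratic cost drops with the number of patches, not the raw sequence length.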
CausalTimePrior: The Missing Link for AI That Understands Time and Cause
Researchers have introduced CausalTimePrior, a new framework to generate synthetic time series data with known interventions. This breakthrough addresses a critical gap in training AI models to understand causality over time, paving the way for foundation models in time series analysis.
Tuning-Free LLM Framework IKGR Builds Strong Recommender by Extracting Explicit User Intent
Researchers propose IKGR, a novel LLM-based recommender that constructs an intent-centric knowledge graph without model fine-tuning. It explicitly links users and items to extracted intents, showing strong performance on cold-start and long-tail items.
Beyond Words: Neural Cellular Automata Offer New Path to AI Intelligence
Researchers propose using neural cellular automata to generate synthetic data for pre-training language models, achieving up to 6% improvement in downstream performance while using 10x less data than natural language pre-training.
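To see why cellular automata yield structured-but-nontrivial token streams, consider a classical elementary CA (rule 110) unrolled into a sequence — a stand-in for the *neural* cellular automata the paper actually uses, shown here only to illustrate the synthetic-data idea:

```python
def step_rule110(state):
    """One update of elementary CA rule 110; neighbors wrap around."""
    rule, n = 110, len(state)
    return [
        (rule >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    ]

def generate_sequence(width=16, steps=8, seed_pos=8):
    """Unroll the CA row by row into one flat token sequence,
    usable as synthetic pretraining data."""
    state = [0] * width
    state[seed_pos] = 1
    rows = [state]
    for _ in range(steps):
        state = step_rule110(state)
        rows.append(state)
    return [bit for row in rows for bit in row]

seq = generate_sequence()
```

Each token is locally determined by its neighborhood in the previous row, giving the stream compositional structure a language model can learn from without any natural-language content.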
New Research Improves Text-to-3D Motion Retrieval with Interpretable Fine-Grained Alignment
Researchers propose a novel method for retrieving 3D human motion sequences from text descriptions using joint-angle motion images and token-patch interaction. It outperforms state-of-the-art methods on standard benchmarks while offering interpretable correspondences.
Guardian AI: How Markov Chains, RL, and LLMs Are Revolutionizing Missing-Child Search Operations
Researchers have developed Guardian, an AI system that combines interpretable Markov models, reinforcement learning, and LLM validation to create dynamic search plans for missing children during the critical first 72 hours. The system transforms unstructured case data into actionable geospatial predictions with built-in quality assurance.
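The interpretable Markov-model core of such a system reduces to propagating a probability distribution over geographic zones through a transition matrix. A minimal sketch with an invented 3-zone matrix (the zones and probabilities are made up, not Guardian's):

```python
import numpy as np

def location_distribution(P: np.ndarray, start: int, hours: int) -> np.ndarray:
    """Probability over zones after `hours` steps of a Markov chain with
    row-stochastic transition matrix P, starting from zone `start`."""
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    return dist @ np.linalg.matrix_power(P, hours)

# Toy zones: home, park, transit hub.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
dist = location_distribution(P, start=0, hours=2)
```

Because every prediction is just matrix arithmetic over named zones, searchers can inspect exactly why a zone was prioritized — the interpretability the article highlights.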
VAST's $50M Funding Signals 3D AI Revolution: From Foundation Models to World Simulation
AI startup VAST has secured $50 million in Series A funding while advancing its 3D foundation models that are setting new industry standards. The company is preparing to launch its first world model, positioning itself at the forefront of spatial AI development.
PAI Emerges as Potential Game-Changer in AI Video Generation Landscape
PAI has launched publicly, offering a new approach to AI video generation that prioritizes character consistency and narrative coherence. Early testing suggests it may address key limitations of current video AI systems.
CoRe-BT: The Missing Piece for AI Brain Tumor Diagnosis
Researchers introduce CoRe-BT, a multimodal benchmark combining MRI, pathology images, and text reports for brain tumor typing. The dataset addresses real-world clinical challenges where diagnostic data is often incomplete, enabling more robust AI models for glioma classification.
Utonia AI Breakthrough: A Single Transformer Model Unifies All 3D Point Cloud Data
Researchers have developed Utonia, a single self-supervised transformer that learns unified 3D representations across diverse point cloud data types including LiDAR, CAD models, indoor scans, and video-lifted data. This breakthrough enables unprecedented cross-domain transfer and emergent behaviors in 3D AI.