scientific method
30 articles about the scientific method in AI news
Nature Astronomy Paper Argues LLMs Threaten Scientific Authorship, Sparking AI Ethics Debate
A paper in Nature Astronomy posits a novel criterion for scientific contribution: if an LLM can easily replicate it, it may not be sufficiently novel. This directly challenges the perceived value of incremental, LLM-augmented research.
Meta's QTT Method Fixes Long-Context LLM 'Buried Facts' Problem, Boosts Retrieval Accuracy
Meta researchers identified a failure mode where LLMs with 128K+ context windows miss information buried in the middle of documents. Their Query-only Test-Time Training (QTT) method adapts models at inference time, significantly improving retrieval accuracy.
Ethan Mollick Critiques Scientific Publishing's AI Inertia: PDFs Still Dominate in 2026
Wharton professor Ethan Mollick highlights that scientific papers in 2026 are still primarily uploaded as formatted PDFs to restrictive academic archives, signaling slow adaptation to AI's potential for accelerating research.
ChatGPT GPT-5.4 Pro's 'Thinking' Harness Shows Advanced Scientific Paper Comprehension, Including Figure Analysis
OpenAI's ChatGPT GPT-5.4 Pro, with its 'Thinking' harness, demonstrates advanced multimodal understanding of scientific papers, identifying key figures and extracting visual information beyond text parsing.
Claude Code's New Research Mode: How to Apply Scientific Coding Breakthroughs to Your Projects
Claude Code's Research Mode, powered by Opus 4.6, can accelerate complex scientific coding. Here's how to configure it for your own data-intensive workflows.
Stanford and Munich Researchers Pioneer Tool Verification Method to Prevent AI's Self-Training Pitfalls
Researchers from Stanford and the University of Munich have developed a novel verification system that uses code checkers to prevent AI models from reinforcing incorrect patterns during self-training. The method improves mathematical reasoning accuracy by up to 31.6%.
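The summary describes verifier-filtered self-training. As a minimal sketch (the `checker`, `generate_candidates`, and toy task below are illustrative assumptions, not the Stanford/Munich implementation), the core idea is to admit a self-generated solution into the training pool only after an independent check passes:

```python
# Hypothetical sketch: filter self-generated solutions with a code checker
# before they enter the self-training pool. All names and the toy addition
# task are illustrative, not the paper's actual code.

def checker(problem, candidate):
    """Verify a candidate answer with an independent check."""
    a, b = problem
    return candidate == a + b  # ground-truth check for this toy task

def generate_candidates(problem):
    """Stand-in for model sampling: several proposals, some wrong."""
    a, b = problem
    return [a + b, a + b + 1, a * b]

def verified_self_training_pool(problems):
    pool = []
    for p in problems:
        for cand in generate_candidates(p):
            if checker(p, cand):   # keep only checker-approved traces
                pool.append((p, cand))
                break              # one verified example per problem
    return pool

print(verified_self_training_pool([(2, 3), (10, 4)]))
```

The point of the filter is that unverified (possibly wrong) outputs never feed back into training, which is the self-reinforcement failure the paper targets.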
Annealed Co-Generation: A New AI Framework Tackles Scientific Complexity Through Pairwise Modeling
Researchers propose Annealed Co-Generation, a novel AI framework that simplifies multivariate generation in scientific applications by modeling variables in pairs rather than jointly. The approach reduces computational burden and data imbalance while maintaining coherence across complex systems.
AI Bridges the Gap Between Data and Discovery: New Framework Aligns Scientific Observations with Decades of Literature
Researchers have developed a novel AI framework that aligns X-ray spectra with scientific literature using contrastive learning. This multimodal approach improves physical variable estimation by 16-18% and identifies high-priority astronomical targets, demonstrating how AI can accelerate scientific discovery by connecting data with domain knowledge.
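The alignment objective described here is contrastive. A hedged sketch of a CLIP-style symmetric InfoNCE loss between two embedding modalities (the paper's actual architecture and encoders are not given in this summary; the embeddings below are random placeholders):

```python
# Hedged sketch of CLIP-style contrastive alignment between two modalities
# (e.g. spectrum embeddings vs. literature-text embeddings). Only the
# InfoNCE objective is illustrated; encoders are stand-ins.
import numpy as np

def info_nce_loss(spec_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss; matched pairs share a row index."""
    s = spec_emb / np.linalg.norm(spec_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = s @ t.T / temperature          # (N, N) cosine similarities
    labels = np.arange(len(s))              # diagonal entries are positives

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
aligned = info_nce_loss(emb, emb)           # perfectly matched pairs
shuffled = info_nce_loss(emb, emb[::-1])    # mismatched pairs
print(aligned < shuffled)  # → True: matched pairs score lower loss
```

Training with this loss pulls each observation toward its paired text passage and away from the rest of the batch, which is what lets the model estimate physical variables from literature-grounded embeddings.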
EmbodiedAct: How Active AI Agents Are Revolutionizing Scientific Simulation
Researchers have developed EmbodiedAct, a framework that transforms scientific software into active AI agents with real-time perception. This breakthrough addresses critical limitations in how LLMs interact with physical simulations, enabling more reliable scientific discovery through embodied actions.
AI's Causal Reasoning Gap: New Method Tests How Well Models Understand 'What If' Scenarios
Researchers introduce Double Counterfactual Consistency (DCC), a training-free method to evaluate and improve LLMs' causal reasoning. The technique reveals significant weaknesses in how models handle hypothetical scenarios and counterfactual thinking, addressing a critical limitation in current AI systems.
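One plausible reading of a double-counterfactual check (the DCC paper's exact protocol is not specified in this summary, so the structural model below is a speculative toy) is a round-trip test: apply a counterfactual intervention, then reverse it, and verify the factual outcome is recovered:

```python
# Speculative illustration of a double-counterfactual consistency check.
# The SCM y = 2x + u and the round-trip criterion are assumptions for
# illustration, not the DCC paper's method.

def scm_counterfactual(x_obs, y_obs, x_new):
    """Abduct the noise u from the observation, then predict y
    under the intervention do(x = x_new)."""
    u = y_obs - 2 * x_obs      # abduction step
    return 2 * x_new + u       # action + prediction

def double_counterfactual_consistent(x_obs, y_obs, x_new):
    y_cf = scm_counterfactual(x_obs, y_obs, x_new)    # forward counterfactual
    y_back = scm_counterfactual(x_new, y_cf, x_obs)   # reverse it
    return y_back == y_obs                            # round-trip must close

print(double_counterfactual_consistent(3, 7, 5))  # exact SCM passes: True
```

An exact causal model always closes the loop; an LLM answering the two counterfactual queries in natural language often will not, which is the kind of inconsistency a training-free probe can surface.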
ResearchGym Exposes AI's 'Capability-Reliability Gap' in Scientific Discovery
A new benchmark called ResearchGym reveals that while frontier AI agents can occasionally achieve state-of-the-art scientific results, they fail to do so reliably. In controlled evaluations, agents completed only 26.5% of research sub-tasks on average, highlighting critical limitations in autonomous scientific discovery.
AI Crosses the Rubicon: From Scientific Tool to Active Discovery Partner
This week marked a paradigm shift as AI systems transitioned from research tools to active participants in scientific discovery. OpenAI's GPT-5.2 Pro helped conjecture a new formula in particle physics, while Google's Gemini 3 Deep Think achieved unprecedented results on reasoning benchmarks. These developments signal AI's growing capacity for genuine scientific contribution.
AI Gets a Confidence Meter: New Method Tackles LLM Hallucinations in Interpretable Models
Researchers propose an uncertainty-aware framework for Concept Bottleneck Models that quantifies and incorporates the reliability of LLM-generated concept labels, addressing critical hallucination risks while maintaining model interpretability.
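To make the idea concrete, here is a hedged illustration (not the paper's method; the weights and confidences are invented) of attenuating LLM-generated concept labels by their reliability before the bottleneck's final linear head:

```python
# Hedged illustration: down-weight LLM-generated concept labels by their
# reported confidence so unreliable concepts contribute less to the final
# prediction. All numbers below are hypothetical.

def bottleneck_predict(concepts, confidences, weights, bias=0.0):
    """Concept-bottleneck score: confidence-weighted linear combination."""
    score = bias
    for c, conf, w in zip(concepts, confidences, weights):
        score += w * c * conf   # uncertain concepts are attenuated
    return score

concepts    = [1.0, 1.0, 0.0]   # LLM-asserted concept labels
confidences = [0.9, 0.2, 0.95]  # reliability of each label
weights     = [2.0, -3.0, 1.0]  # learned task head

print(bottleneck_predict(concepts, confidences, weights))
```

The second concept carries a large negative weight, but its low confidence (0.2) means a possible hallucination barely moves the score, while interpretability is preserved because each concept's contribution is still a single readable term.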
Frozen Giants Aligned: New AI Method Bridges Vision and Language Without Training
Researchers have developed HDFLIM, a novel framework that aligns powerful frozen vision and language models using hyperdimensional computing. This approach enables efficient image captioning without computationally intensive fine-tuning, preserving original model capabilities while creating cross-modal understanding.
Teaching AI to Think Before It Speaks: New Method Boosts Reasoning Stability
Researchers have developed Metacognitive Behavioral Tuning (MBT), a framework that teaches large language models human-like self-regulation during complex reasoning. This approach addresses the 'reasoning collapse' phenomenon where models fail despite correct intermediate steps, achieving higher accuracy with fewer computational resources.
AI Research Loop Paper Claims Automated Experimentation Can Accelerate AI Development
A shared paper highlights research into using AI to run a mostly automated loop of experiments, suggesting a method to speed up AI research itself. The source notes a potential problem with the approach but does not specify what it is.
Mercor Data Breach Exposes Expert Human Annotation Pipeline Used by Frontier AI Labs
Hackers have reportedly accessed Mercor's expert human data collection systems, which are used by leading AI labs to build foundation models. This breach could expose proprietary training methodologies and sensitive model development data.
Diffusion Recommender Models Fail Reproducibility Test: Study Finds 'Illusion of Progress' in Top-N Recommendation Research
A reproducibility study of nine recent diffusion-based recommender models finds only 25% of reported results are reproducible. Well-tuned simpler baselines outperform the complex models, revealing a conceptual mismatch and widespread methodological flaws in the field.
Kyushu University AI Model Achieves 44.4% Solar Cell Efficiency, Surpassing Theoretical SQ Limit
Researchers at Kyushu University used an AI-driven inverse design method to create a photonic crystal solar cell with 44.4% efficiency, exceeding the 33.7% Shockley-Queisser limit for single-junction cells.
DST: Domain-Specialized Tree of Thought Cuts Computational Overhead by 26-75% with Plug-and-Play Predictors
Researchers introduce DST, a plug-and-play predictor that guides Tree of Thought reasoning with lightweight supervised heuristics. The method matches or exceeds standard ToT accuracy while reducing computational costs by 26-75% across mathematical and logical reasoning benchmarks.
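The mechanism described is a cheap predictor pruning Tree of Thought branches. A minimal sketch (the heuristic, moves, and beam size below are illustrative assumptions, not the DST predictor) of how a lightweight scorer cuts the number of states expanded:

```python
# Illustrative sketch of predictor-guided tree search, not the DST code:
# a lightweight heuristic ranks partial states so only the most promising
# branches are expanded, reducing expensive evaluations.

def heuristic(state, target):
    """Cheap stand-in for DST's supervised predictor: distance to target."""
    return -abs(target - state)

def guided_tot(start, target, steps, beam=2):
    """Reach `target` from `start` using +1 / *2 moves, expanding only
    the top-`beam` states per depth as ranked by the heuristic."""
    frontier = [start]
    for _ in range(steps):
        children = set()
        for s in frontier:
            children.update((s + 1, s * 2))
        if target in children:
            return True
        # prune: keep only the heuristically best states
        frontier = sorted(children, key=lambda s: heuristic(s, target))[-beam:]
    return False

print(guided_tot(1, 10, steps=5))  # → True, e.g. 1 → 2 → 4 → 5 → 10
```

With `beam=2` the search expands at most two states per depth instead of the full tree, which is the same cost-versus-accuracy trade the 26-75% overhead reduction refers to.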
LLM-Driven Heuristic Synthesis for Industrial Process Control: Lessons from Hot Steel Rolling
Researchers propose a framework where an LLM iteratively writes and refines human-readable Python controllers for industrial processes, using feedback from a physics simulator. The method generates auditable, verifiable code and employs a principled budget strategy, eliminating the need for problem-specific tuning.
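The propose-simulate-refine loop can be sketched as follows. This is a toy under loud assumptions: the "LLM" is replaced by a gain sweep, and the plant is a first-order system invented for illustration, nothing here is the paper's steel-rolling setup:

```python
# Minimal sketch of the propose -> simulate -> refine loop. A real system
# would have a language model rewrite the Python controller using the
# simulator's feedback; here a stub sweeps a proportional gain instead.

def simulate(controller, setpoint=1.0, steps=50):
    """Toy first-order plant; returns total absolute tracking error."""
    x, err = 0.0, 0.0
    for _ in range(steps):
        u = controller(setpoint - x)
        x += 0.1 * (u - x)     # simple plant dynamics
        err += abs(setpoint - x)
    return err

def propose_controller(gain):
    """Stand-in for an LLM-written, human-readable controller."""
    def controller(error):
        return gain * error
    return controller

def refine_loop(candidate_gains):
    """Keep the candidate whose simulated feedback score is best."""
    return min(candidate_gains, key=lambda g: simulate(propose_controller(g)))

print(refine_loop([0.5, 1.0, 2.0, 4.0]))  # → 4.0 tracks the setpoint best
```

Because the candidate controllers are ordinary Python functions, each iteration's output stays auditable, which is the property the summary emphasizes over opaque learned policies.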
Anthropic Launches Dedicated Science Blog to Chronicle AI Research and Applications
Anthropic has launched a new Science Blog to publish its research and case studies on using AI to accelerate scientific discovery, aligning with its mission to increase the pace of scientific progress.
AI Learns to Use Tools Without Expensive Training: The Rise of In-Context Reinforcement Learning
Researchers have developed In-Context Reinforcement Learning (ICRL), a method that teaches large language models to use external tools through demonstration examples during reinforcement learning. This approach eliminates costly supervised fine-tuning while enabling models to gradually transition from few-shot to zero-shot tool usage capabilities.
AI Research Accelerator: Autonomous System Completes 700 Experiments in 48 Hours, Optimizing Model Training
An AI system autonomously conducted 700 experiments over two days, reducing GPT-2 training time by 11%. This breakthrough demonstrates AI's growing capability to accelerate scientific research and optimize complex processes without human intervention.
New AI Framework Uses Diffusion Models to Authenticate Anti-Counterfeit Codes
Researchers propose a novel diffusion-based AI system to authenticate Copy Detection Patterns (CDPs), a key anti-counterfeiting technology. It outperforms existing methods by classifying printer signatures, showing resilience against unseen counterfeits.
MAPLE: How Process-Aligned Rewards Are Solving AI's Medical Reasoning Crisis
Researchers introduce MAPLE, a new AI training paradigm that replaces statistical consensus with expert-aligned process rewards for medical reasoning. This approach ensures clinical correctness over mere popularity in medical LLMs, significantly outperforming current methods.
Temporal Freedom: How Unrestricted Data Access Could Revolutionize LLM Performance
Researchers at Tsinghua University have discovered that allowing Large Language Models to freely search through temporal data significantly outperforms traditional rigid pipeline approaches and costly retrieval methods. This breakthrough suggests a paradigm shift in how we structure AI information access.
AI Now Surpasses Human Experts in Technical Domains, Study Finds
New research mapping AI capabilities to human expertise reveals frontier models have already surpassed domain experts on technical and scientific benchmarks. The study forecasts AI will reach top-performer human levels by late 2027.
From Code to Discovery: The Next Frontier of AI Agents in Research
AI researcher Omar Saray predicts a shift from 'agentic coding' to 'agentic research'—where AI systems will autonomously conduct scientific discovery. This evolution promises to accelerate innovation across disciplines.
Beyond Unit Tests: How AI Critics Learn from Sparse Human Feedback to Revolutionize Coding Assistants
Researchers have developed a novel method to train AI critics using sparse, real-world human feedback rather than just unit tests. This approach bridges the gap between academic benchmarks and practical coding assistance, improving performance by 15.9% on SWE-bench through better trajectory selection and early stopping.