Non-destructive testing
6 articles about non-destructive testing in AI news
AgentGate: How an AI Swarm Tested and Verified a Progressive Trust Model for AI Agent Governance
A technical case study details how a coordinated swarm of nine AI agents attacked a governance system called AgentGate, surfaced a structural limitation in its bond-locking mechanism, and then verified the fix: a reputation-gated Progressive Trust Model. This provides a concrete example of the red-team → defense → re-test loop for securing autonomous AI systems.
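The core idea of a reputation-gated trust model can be sketched in a few lines: an agent's permitted actions expand as its verified track record grows, rather than being unlocked by a one-time bond. The tier thresholds and capability names below are illustrative assumptions, not AgentGate's actual policy.

```python
from dataclasses import dataclass

# Hypothetical tiers: (minimum reputation, capabilities unlocked at that tier).
# Reputation is earned through verified task completions, not bond size.
TIERS = [
    (0,  {"read"}),
    (10, {"read", "propose"}),
    (50, {"read", "propose", "execute"}),
]

@dataclass
class Agent:
    name: str
    reputation: int = 0

def capabilities(agent: Agent) -> set:
    """Return the capabilities of the highest tier the agent's reputation unlocks."""
    allowed = set()
    for threshold, caps in TIERS:
        if agent.reputation >= threshold:
            allowed = caps
    return allowed

def authorize(agent: Agent, action: str) -> bool:
    """Gate an action on the agent's current trust tier."""
    return action in capabilities(agent)
```

A low-reputation agent can read but not execute; only sustained verified behavior promotes it, which is what closes the bond-locking loophole described in the case study.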
Chinese Railway Robot Detects 0.1mm Rail Scratches, Performs Automated Grinding Repairs
A railway maintenance robot in China uses high-precision detection and automated grinding to find and repair surface scratches as small as 0.1 mm, and employs ultrasonic flaw detection to identify internal rail defects.
How to Structure Your Claude Code Project So It Scales Beyond Demos
A battle-tested project structure that separates skills by intent, leverages hooks, and integrates MCP servers to keep Claude Code reliable across real projects.
How I Built a Production AI Query Engine on 28 Tables — And Why I Used Both Text-to-SQL and Function Calling
A detailed case study on building a secure, production-grade AI query engine for an affiliate marketing ERP. The key innovation is a hybrid architecture using Text-to-SQL for complex analytics and MCP-based function calling for actions, secured by a 3-layer AST validator.
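The routing split at the heart of that hybrid design can be sketched briefly: analytics questions go down a Text-to-SQL path whose output is validated before execution, while action requests are dispatched as function calls. The table whitelist, keyword check, and routing labels below are illustrative assumptions, and the simple regex checks are a stand-in for the article's 3-layer AST validator, not a reproduction of it.

```python
import re

# Hypothetical whitelist of queryable ERP tables and a read-only keyword guard.
ALLOWED_TABLES = {"clicks", "conversions", "payouts"}
FORBIDDEN = re.compile(r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|GRANT)\b", re.I)

def validate_sql(sql: str) -> bool:
    """Simplified stand-in for an AST validator: read-only, whitelisted tables only."""
    if not sql.lstrip().upper().startswith("SELECT"):
        return False
    if FORBIDDEN.search(sql):
        return False
    # findall with alternation yields (FROM-group, JOIN-group) tuples; flatten them.
    pairs = re.findall(r"\bFROM\s+(\w+)|\bJOIN\s+(\w+)", sql, re.I)
    referenced = {name for pair in pairs for name in pair if name}
    return referenced <= ALLOWED_TABLES

def route(query_kind: str, payload: str):
    """Dispatch analytics to the Text-to-SQL path, actions to function calling."""
    if query_kind == "analytics":
        return ("sql", payload) if validate_sql(payload) else ("rejected", payload)
    return ("function_call", payload)  # MCP-based tool path for actions
```

The design point the case study makes is that neither path alone suffices: Text-to-SQL handles open-ended analytics, while function calling keeps state-changing actions behind explicitly defined, auditable tools.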
Beyond the Loss Function: New AI Architecture Embeds Physics Directly into Neural Networks for 10x Faster Wave Modeling
Researchers have developed a novel Physics-Embedded PINN that integrates wave physics directly into neural network architecture, achieving 10x faster convergence and dramatically reduced memory usage compared to traditional methods. This breakthrough enables large-scale 3D wave field reconstruction for applications from wireless communications to room acoustics.
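For context, a traditional PINN enforces the physics only as a soft penalty in the loss. For the scalar wave equation with speed $c$, the standard formulation is:

```latex
\mathcal{L} \;=\; \underbrace{\sum_i \bigl| u_\theta(x_i, t_i) - u_i \bigr|^2}_{\text{data fit}}
\;+\; \lambda \underbrace{\sum_j \bigl| \partial_t^2 u_\theta(x_j, t_j) - c^2 \nabla^2 u_\theta(x_j, t_j) \bigr|^2}_{\text{wave-equation residual}}
```

The architecture-embedding approach described in the article instead builds the wave operator into the network itself rather than penalizing its residual, which is what removes the costly balancing of $\lambda$ during training.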
The Privacy Paradox: How AI Agents Are Learning to Rewrite Sensitive Information Instead of Refusing
New research introduces SemSIEdit, an agentic framework that enables LLMs to self-correct and rewrite sensitive semantic information rather than refusing to answer. The approach reduces sensitive information leakage by 34.6% while maintaining utility, revealing a scale-dependent safety divergence in how different models handle privacy protection.
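The rewrite-instead-of-refuse idea can be illustrated with a toy redaction loop: detect sensitive spans, replace them with placeholders, and return the useful remainder instead of a blanket refusal. The patterns and placeholders below are toy assumptions for illustration, not SemSIEdit's actual agentic pipeline.

```python
import re

# Toy sensitive-span detectors: pattern -> generic placeholder.
SENSITIVE_PATTERNS = {
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",            # US Social Security number
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",    # email address
}

def rewrite_sensitive(answer: str) -> tuple:
    """Return (rewritten answer, whether anything was redacted)."""
    edited = answer
    for pattern, placeholder in SENSITIVE_PATTERNS.items():
        edited = re.sub(pattern, placeholder, edited)
    return edited, edited != answer

def respond(answer: str) -> str:
    rewritten, changed = rewrite_sensitive(answer)
    # A refusal-based policy would return "I can't share that." whenever
    # `changed` is True; the rewrite approach keeps the non-sensitive content.
    return rewritten
```

The research's point is that this kind of self-correction preserves utility: the caller still gets an answer, minus the leaked semantic information the refusal policy would have thrown away wholesale.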