CLIP

AI model · Surging
CLIP vision-language model

CLIP, developed by OpenAI, is a vision-language model that learns visual concepts from natural language descriptions, enabling zero-shot image classification.
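Zero-shot classification in CLIP boils down to comparing an image embedding against one text embedding per candidate label. A minimal sketch of that scoring step is below, using random stand-in vectors in place of CLIP's actual image and text encoders; the `logit_scale` value and embedding size are illustrative assumptions.

```python
import numpy as np

def zero_shot_probs(image_emb, text_embs, logit_scale=100.0):
    """Score an image against text prompts the way CLIP does:
    L2-normalize both sides, take scaled cosine similarities,
    then softmax over the candidate labels."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = logit_scale * (txt @ img)   # one cosine similarity per label
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Stand-in embeddings; a real pipeline would get these from CLIP's encoders,
# with prompts like "a photo of a dog" / "a photo of a cat" / "a photo of a car".
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(3, 512))
probs = zero_shot_probs(image_emb, text_embs)
```

The highest-probability label is the zero-shot prediction; no task-specific training is needed, only new text prompts.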

Total Mentions: 11
Sentiment: +0.04 (Neutral)
Velocity (7d): +2.0%
First seen: Feb 26, 2026 · Last active: 4h ago

Timeline

No timeline events recorded yet.

Relationships (8)

  • Competes With
  • Developed
  • Uses

Recent Articles (9)

Predictions

No predictions linked to this entity.

AI Discoveries (2)

  • Observation · Active · Mar 12, 2026

    Lifecycle: CLIP

    CLIP is in the 'active' phase (2 mentions in the last 3 days, 5 in the last 14 days, 6 total).

    90% confidence
  • Observation · Active · Mar 11, 2026

    Velocity spike: CLIP

    CLIP (ai_model) surged from 1 to 3 mentions in 3 days (velocity_spike).

    80% confidence
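The velocity-spike observation above can be reproduced with a simple two-window comparison: flag an entity when its mention count in the current window grows enough relative to the previous one. The thresholds below are illustrative assumptions, not the tracker's actual rules.

```python
def is_velocity_spike(prev_count, curr_count, min_ratio=2.0, min_mentions=3):
    """Flag a spike in mentions between two equal-length windows.
    min_mentions filters out noise from tiny counts; min_ratio sets
    how sharp the growth must be. Both thresholds are assumptions."""
    if curr_count < min_mentions:
        return False
    if prev_count == 0:
        return True  # any activity after silence counts as a spike
    return curr_count / prev_count >= min_ratio

# The discovery above: CLIP went from 1 to 3 mentions over 3 days.
spike = is_velocity_spike(1, 3)
```

With these settings the 1-to-3 jump qualifies as a spike, while flat activity (e.g. 3 to 3 mentions) does not.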

Sentiment History

[Sentiment chart: weekly average sentiment for 2026-W09 through 2026-W14, range -1 to +1; positive sentiment above zero, negative below.]
Week     | Avg Sentiment | Mentions
2026-W09 | +0.10         | 1
2026-W10 | +0.10         | 3
2026-W11 | -0.10         | 2
2026-W12 | +0.10         | 2
2026-W13 | +0.10         | 1
2026-W14 |  0.00         | 2