LLMs
-

BAGEL from ByteDance, an open-source multimodal AI
Have you ever wondered how AI can generate detailed captions from images, answer…
-

Universal geometry links multimodal embedding spaces
What if all the embedding spaces used in AI — text, images, speech,…
-

OpenAI launched HealthBench to test LLM safety in health
Large language models are rapidly entering the healthcare space. But how do we…
-

Absolute Zero – AI training without any human data
Imagine an AI that improves without human-labeled data and curated datasets. Just pure…
-

Qwen3 by Alibaba, a new open-source model with hybrid reasoning
Released on April 28, 2025, Qwen3 is an open-source LLM family with hybrid reasoning that extends…
-

Meta’s Llama 4, advanced multimodal models with long context
Meta released Llama 4, a new suite of AI models that offers advanced…
-

Gemma 3 matches 98% of DeepSeek-R1 and runs on a single GPU or TPU
Gemma 3, Google’s latest AI model, offers multi-modal capabilities and achieves 98% of…
-

Baidu released two advanced LLMs, ERNIE 4.5 and ERNIE X1
Chinese technology giant Baidu is challenging leading AI models with its most recent…
-

SWE-RL enhances LLMs' coding capabilities
Meta introduces SWE-RL, marking the first time reinforcement learning has been used to…
-

DeepSeek-R1 revolutionizes the AI landscape
The Chinese AI startup DeepSeek has made a breakthrough in AI with the…