Getting Started with SmolVLM2 – Code Inference

In this article, we cover inference code for SmolVLM2. We carry out image and video inference experiments using SmolVLM2-2.2B-Instruct and SmolVLM2-256M-Instruct. ...
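Before diving into the detailed experiments, here is a minimal sketch of what SmolVLM2 image inference looks like through the Hugging Face transformers API. The model ID follows the Hugging Face Hub naming convention, and the sample image URL, prompt, and generation settings are illustrative assumptions, not the exact code used later in the article.

```python
# Minimal sketch: single-image inference with SmolVLM2 via Hugging Face transformers.
# Assumes a transformers version with SmolVLM2 support and a CUDA-capable GPU.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "HuggingFaceTB/SmolVLM2-2.2B-Instruct"  # assumed Hub model ID

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
).to("cuda")

# Chat-style prompt containing one image and one question (example URL).
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

# The processor applies the chat template and prepares pixel values + token IDs.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device, dtype=torch.bfloat16)

generated_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

Video inference follows the same chat-template pattern, with the image entry replaced by a video entry pointing to a local file; the sections below walk through both cases in detail with SmolVLM2-2.2B-Instruct and SmolVLM2-256M-Instruct.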