    Microsoft Releases Phi-4-Reasoning-Vision-15B: A Compact Multimodal Model for Math, Science, and GUI Understanding

    By Naveed Ahmad · March 7, 2026
    Microsoft has released Phi-4-reasoning-vision-15B, a 15-billion-parameter open-weight multimodal reasoning model designed for image and text tasks that require both perception and selective reasoning. It is a compact model built to balance reasoning quality, compute efficiency, and training-data requirements, with particular strength in scientific and mathematical reasoning and in understanding user interfaces.

    Technical report: https://arxiv.org/pdf/2603.03975

    What is the model built on?

    Phi-4-reasoning-vision-15B combines the Phi-4-Reasoning language backbone with the SigLIP-2 vision encoder using a mid-fusion architecture. In this setup, the vision encoder first converts images into visual tokens, then those tokens are projected into the language model embedding space and processed by the pretrained language model. This design acts as a practical trade-off: it preserves strong cross-modal reasoning while keeping training and inference costs manageable compared with heavier early-fusion designs.
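
In code, mid-fusion amounts to projecting the vision encoder's output tokens into the language model's embedding space and concatenating them with the text tokens before the language model runs. The NumPy sketch below illustrates only that data flow; the dimensions, the fabricated encoder, and the projection matrix are all illustrative assumptions, not the real model's components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only -- not the real model's sizes.
VISION_DIM = 64    # assumed width of the vision encoder's output
LM_DIM = 128       # assumed language-model embedding width

def encode_image(image, n_tokens=16):
    """Stand-in for the vision encoder: image -> (n_tokens, VISION_DIM).
    A real encoder would patchify and run a transformer; here we just
    fabricate token features of the right shape for the sketch."""
    return rng.standard_normal((n_tokens, VISION_DIM))

# Learned projection from vision space into the LM embedding space.
W_proj = rng.standard_normal((VISION_DIM, LM_DIM)) * 0.02

def mid_fusion_inputs(image, text_embeds):
    """Project visual tokens into the LM space and prepend them to text."""
    visual_tokens = encode_image(image)   # (16, VISION_DIM)
    projected = visual_tokens @ W_proj    # (16, LM_DIM)
    return np.concatenate([projected, text_embeds], axis=0)

image = np.zeros((224, 224, 3))                   # dummy image
text_embeds = rng.standard_normal((10, LM_DIM))   # 10 text tokens
fused = mid_fusion_inputs(image, text_embeds)
print(fused.shape)  # (26, 128): 16 visual + 10 text tokens
```

The pretrained language model then processes this fused sequence as ordinary embeddings, which is why mid-fusion is cheaper than retraining a jointly fused model from scratch.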

    Why did Microsoft take the smaller-model route?

    Many recent vision-language models have grown in parameter count and token usage, which raises both latency and deployment cost. Phi-4-reasoning-vision-15B was built as a smaller alternative that still handles common multimodal workloads without relying on extremely large training datasets or excessive inference-time token generation. The model was trained on 200 billion multimodal tokens, building on Phi-4-Reasoning, which was trained on 16 billion tokens, and ultimately on the Phi-4 base model, which was trained on 400 billion unique tokens. Microsoft contrasts that with the more than 1 trillion tokens used to train several recent multimodal models such as Qwen 2.5 VL, Qwen 3 VL, Kimi-VL, and Gemma 3.

    High-resolution perception was a core design choice

    One of the more useful technical lessons in Microsoft's technical report is that multimodal reasoning often fails because perception fails first. Models can miss the answer not because they lack reasoning ability, but because they fail to extract the relevant visual details from dense images such as screenshots, documents, or interfaces with small interactive elements.

    Phi-4-reasoning-vision-15B uses a dynamic resolution vision encoder with up to 3,600 visual tokens, which is intended to support high-resolution understanding for tasks such as GUI grounding and fine-grained document analysis. The Microsoft team states that high-resolution, dynamic-resolution encoders yield consistent improvements, and explicitly notes that accurate perception is a prerequisite for high-quality reasoning.
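
To make the token budget concrete, here is a minimal sketch of how a dynamic-resolution encoder might cap an image's visual tokens at 3,600. The patch size and the downscaling rule are assumptions for illustration; the 3,600-token cap is the only number taken from the report.

```python
import math

MAX_VISUAL_TOKENS = 3600   # cap stated in the report
PATCH = 16                 # assumed patch size; the real value may differ

def visual_token_count(width, height):
    """Tokens a dynamic-resolution encoder would spend on an image,
    downscaling just enough to respect the token budget."""
    tokens = math.ceil(width / PATCH) * math.ceil(height / PATCH)
    if tokens <= MAX_VISUAL_TOKENS:
        return tokens
    # Downscale both sides by the same factor so the patch grid fits the cap.
    scale = math.sqrt(MAX_VISUAL_TOKENS / tokens)
    w = max(1, math.floor(width * scale / PATCH))
    h = max(1, math.floor(height * scale / PATCH))
    return w * h

print(visual_token_count(448, 448))    # 28 x 28 = 784 tokens, under the cap
print(visual_token_count(3840, 2160))  # a dense 4K screenshot hits the cap
```

The point of the high cap is visible in the second call: a 4K screenshot keeps far more of its native detail than it would under a small fixed-resolution budget, which is what GUI grounding and fine-grained document analysis need.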

    Mixed reasoning instead of forcing reasoning everywhere

    A second important design decision is the model’s mixed reasoning and non-reasoning training strategy. Rather than forcing chain-of-thought-style reasoning for all tasks, the Microsoft team trained the model to switch between two modes. Reasoning samples include explicit reasoning traces, while non-reasoning samples omit them and are used for perception-focused tasks such as captioning, grounding, OCR, and simple VQA. The reasoning data makes up about 20% of the overall training mixture.

    The goal of this hybrid setup is to let the model respond directly on tasks where longer reasoning adds latency without improving accuracy, while still invoking structured reasoning on tasks such as math and science. The Microsoft team also notes an important limitation: the boundary between these modes is learned implicitly, so switching is not always optimal. Users can override the default behavior by prompting with explicit mode-control tokens.
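
A rough sketch of such a mode dispatcher is below. The actual control-token names did not survive this page's extraction, so REASON_ON / REASON_OFF are placeholders, and the task heuristic merely mirrors the split described above (perception tasks answer directly; math and science reason first).

```python
# Placeholder control tokens -- NOT the real tokens used by the model.
REASON_ON = "<reasoning>"
REASON_OFF = "<no_reasoning>"

def build_prompt(task, question, force_mode=None):
    """Prefix the prompt with a mode token: an explicit user override if
    given, otherwise a heuristic mirroring the learned (implicit) boundary."""
    if force_mode is not None:
        token = REASON_ON if force_mode == "reasoning" else REASON_OFF
    else:
        # Perception-style tasks answer directly; everything else reasons.
        direct_tasks = {"caption", "grounding", "ocr", "vqa"}
        token = REASON_OFF if task in direct_tasks else REASON_ON
    return f"{token} {question}"

print(build_prompt("ocr", "Read the label on the bottle."))
print(build_prompt("math", "What is the shaded area in the diagram?"))
print(build_prompt("caption", "Describe the scene.", force_mode="reasoning"))
```

The last call shows the override path: even a normally direct task can be forced into reasoning mode, which is the escape hatch the report suggests for cases where the implicit boundary picks the wrong mode.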

    Where is the model strongest?

    The Microsoft team highlights two main application areas. The first is scientific and mathematical reasoning over visual inputs, including handwritten equations, diagrams, charts, tables, and quantitative documents. The second is computer-use agent tasks, where the model interprets screen content, localizes GUI elements, and supports interaction with desktop, web, or mobile interfaces.
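
On the GUI side, grounding benchmarks such as ScreenSpot score whether a model's predicted click point lands inside the target element's bounding box. Here is a minimal sketch of that check, with made-up coordinates:

```python
def to_pixels(norm_x, norm_y, width, height):
    """Map a model's normalized [0, 1] click point to pixel coordinates."""
    return round(norm_x * width), round(norm_y * height)

def is_hit(point, box):
    """ScreenSpot-style check: does the predicted point fall inside the
    target element's bounding box (left, top, right, bottom)?"""
    x, y = point
    left, top, right, bottom = box
    return left <= x <= right and top <= y <= bottom

# A model localizing a 'Save' button on a 1920x1080 screenshot
# (coordinates are made up for illustration).
point = to_pixels(0.91, 0.05, 1920, 1080)
print(point)                                # (1747, 54)
print(is_hit(point, (1700, 30, 1800, 80)))  # True
```

This is why the high-resolution perception discussed earlier matters: small interactive elements occupy only a few pixels, so a coarse visual encoding makes the predicted point drift outside the box.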

    Benchmark results

    The Microsoft team reports the following benchmark scores for Phi-4-reasoning-vision-15B: 84.8 on AI2D (test), 83.3 on ChartQA (test), 44.9 on MathVerse (mini), 36.2 on MathVision (mini), 75.2 on MathVista (mini), 54.3 on MMMU (val), 64.5 on MMStar, 76.0 on OCRBench, and 88.2 on ScreenSpot-v2. The technical report notes that these results were generated with Eureka ML Insights and VLMEvalKit under fixed evaluation settings, and that the team presents them as comparisons rather than leaderboard claims.

    Key Takeaways

    • Phi-4-reasoning-vision-15B is a 15B open-weight multimodal model built by combining Phi-4-Reasoning with the SigLIP-2 vision encoder in a mid-fusion architecture.
    • The Microsoft team designed the model for compact multimodal reasoning, focusing on math, science, document understanding, and GUI grounding rather than scaling to a much larger parameter count.
    • High-resolution visual perception is a core part of the system, with support for dynamic-resolution encoding and up to 3,600 visual tokens, which helps on dense screenshots, documents, and interface-heavy tasks.
    • The model uses mixed reasoning and non-reasoning training, switching between the two modes depending on whether a task needs explicit reasoning or a direct perception-based answer.
    • Microsoft’s reported benchmarks show strong performance for its size, including results on AI2D, ChartQA, MathVista, OCRBench, and ScreenSpot-v2, supporting its positioning as a compact but capable vision-language reasoning model.

    Check out the Paper, Repo and Model Weights.




