    ‘Tiny’ AI model beats massive LLMs at logic test

By admin · November 13, 2025

A Tiny Recursive Model beat large language models at solving logic puzzles, despite being trained on a much smaller dataset. Credit: Getty

    A small-scale artificial-intelligence model that learns from only a limited pool of data is exciting researchers for its potential to boost reasoning abilities. The model, known as Tiny Recursive Model (TRM), outperformed some of the world’s best large language models (LLMs) at the Abstract and Reasoning Corpus for Artificial General Intelligence (ARC-AGI), a test involving visual logic puzzles that is designed to flummox most machines.

The model — detailed in a preprint on the arXiv server last month [1] — is not readily comparable to an LLM. It is highly specialized, excelling only on the type of logic puzzles on which it is trained, such as sudokus and mazes, and it doesn’t ‘understand’ or generate language. But its ability to perform so well on so few resources — it is 10,000 times smaller than frontier LLMs — suggests a possible route for boosting this capability more widely in AI, say researchers.

“It’s fascinating research into other forms of reasoning that one day might get used in LLMs,” says Cong Lu, a machine-learning researcher formerly at the University of British Columbia in Vancouver, Canada. However, he cautions that the techniques might no longer be as effective if applied on a much larger scale. “Often techniques work very well at small model sizes and then just stop working” at a bigger scale, he says.

    A test of artificial intelligence

    “The results are very significant in my opinion,” says François Chollet, co-founder of AI firm Ndea, who created the ARC-AGI test. Because such models need to be trained from scratch on each new problem, they are “relatively impractical”, but “I expect a lot more research to come out that will build on top of these results”, he adds.

The sole author of the paper — Alexia Jolicoeur-Martineau, an AI researcher at Samsung’s Advanced Institute of Technology in Montreal, Canada — says that her model shows that the idea that only massive models that cost millions of dollars to train can succeed at hard tasks “is a trap”. She has made the model’s code openly available on GitHub for anyone to download and modify. “Currently, there is too much focus on exploiting LLMs rather than devising and expanding new lines of direction,” she wrote on her blog.

    Tiny model, big results

    Most reasoning models are built on top of LLMs, which predict the next word in a sequence by tapping into billions of learned internal connections, known as parameters. They excel by memorizing patterns from billions of documents, which can trip them up when they come to unpredictable logic puzzles.

The TRM takes a different approach. Jolicoeur-Martineau was inspired by a technique known as the hierarchical reasoning model, developed by the AI firm Sapient Intelligence in Singapore. The hierarchical reasoning model improves its answer through multiple iterations and was published in a preprint in June [2].

    The TRM uses a similar approach, but uses just 7 million parameters, compared with 27 million for the hierarchical model and billions or trillions for LLMs. For each puzzle type the algorithm learns, such as a sudoku, Jolicoeur-Martineau trained a brain-inspired architecture known as a neural network on around 1,000 examples, formatted as a string of numbers.
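The training format described above — a puzzle presented as a string of numbers — can be pictured with a quick sketch. This is purely illustrative: the function name and the 0-for-empty convention are assumptions, not details from the paper.

```python
# Illustrative only: flattening a 9x9 sudoku grid into the kind of
# number sequence the article describes (0 marks an empty cell).
def grid_to_sequence(grid):
    """Flatten a 9x9 grid (list of lists) into a list of 81 integers."""
    return [cell for row in grid for cell in row]

empty = [[0] * 9 for _ in range(9)]
empty[0][0] = 5  # a single given clue
seq = grid_to_sequence(empty)
print(len(seq))  # 81 tokens, one per cell
```

With roughly 1,000 such sequences per puzzle type, the training set is tiny by LLM standards, which trains on billions of documents.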

    During training, the model guesses the solution and then compares it with the correct answer, before refining its guess and repeating the process. In this way, it learns strategies to improve its guesses. The model then takes a similar approach to solve unseen puzzles of the same type, successively refining its answer up to 16 times before generating a response.
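The refine-and-repeat loop at inference time can be sketched in a few lines. This is a toy stand-in, not Jolicoeur-Martineau's actual TRM code (which is available on GitHub): the real model refines its guess with a small neural network, whereas `refine` here is a placeholder that nudges a guess toward a fixed answer.

```python
# Toy sketch of the TRM-style answer-refinement loop: start from an
# initial guess and apply an improvement step up to 16 times, stopping
# early once a pass no longer changes the answer.
MAX_STEPS = 16

def solve(initial_guess, refine):
    answer = initial_guess
    for _ in range(MAX_STEPS):
        improved = refine(answer)
        if improved == answer:  # converged: further passes change nothing
            break
        answer = improved
    return answer

# Placeholder refinement: move each cell one step toward a known target.
target = [5, 3, 4]
step = lambda guess: [g + (t > g) - (t < g) for g, t in zip(guess, target)]
print(solve([0, 0, 0], step))  # converges to [5, 3, 4] within 16 passes
```

The cap of 16 refinement passes matches the figure reported in the article; everything else in the sketch is a simplification.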
