AI Lab

CALL CLOSED

Impact of Quantization on Small Language Models (SLMs) for Multilingual Mathematical Reasoning Tasks

By: Angie Paola Giraldo Ramírez

Small Language Models (SLMs) offer a lightweight alternative for NLP on resource-constrained devices, and techniques like quantization can further reduce their computational footprint. However, most quantization research has focused on large models, leaving open questions about how it affects SLMs' ability to solve complex, multilingual math problems. Despite evidence that compressed SLMs can still reason effectively, we lack systematic studies of their performance across languages such as Spanish, English, French, and German.
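To make the core mechanism concrete, the sketch below shows symmetric per-tensor int8 quantization, one of the simplest schemes a study like this might vary. It is a minimal illustration in NumPy, not the project's actual quantization pipeline; the function names and the per-tensor granularity are assumptions for exposition.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127].

    Illustrative sketch only; real pipelines often use per-channel scales,
    calibration data, or 4-bit formats.
    """
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale

# Example: quantize a small random weight matrix and measure rounding error.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = float(np.max(np.abs(w - w_hat)))  # bounded by ~scale / 2
```

Lower bit widths shrink memory and speed up inference but enlarge this rounding error, which is exactly the trade-off the study would measure against math-reasoning accuracy.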

This project will quantify how varying levels of quantization impact SLMs’ accuracy on math‐reasoning tasks across those four languages. It pays special attention to “double low-resource” settings—regions with limited data and hardware—where lightweight, reliable models could have the greatest impact. Beyond performance metrics, the study addresses AI safety (by uncovering bias or error patterns) and governance (by informing inclusive policy and evaluation standards), aiming to ensure that compressed models remain robust, fair, and broadly accessible.