<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-05-17T02:04:31Z</responseDate><request verb="GetRecord" metadataPrefix="oai_dc">https://keep.lib.asu.edu/oai/request</request><GetRecord><record><header><identifier>oai:keep.lib.asu.edu:node-200325</identifier><datestamp>2025-06-06T18:51:24Z</datestamp><setSpec>oai_pmh:all</setSpec><setSpec>oai_pmh:repo_items</setSpec></header><metadata><oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd"><dc:identifier>200325</dc:identifier>
          <dc:identifier>https://hdl.handle.net/2286/R.2.N.200325</dc:identifier>
                  <dc:rights>http://rightsstatements.org/vocab/InC/1.0/</dc:rights>
          <dc:rights>http://creativecommons.org/licenses/by-nc-sa/4.0</dc:rights>
                  <dc:date>2025-05</dc:date>
                  <dc:format>58 pages</dc:format>
                  <dc:contributor>Rajan, Vijval</dc:contributor>
          <dc:contributor>Lu, Tian</dc:contributor>
          <dc:contributor>Sha, Xiqing</dc:contributor>
          <dc:contributor>Balachander, Jayaram</dc:contributor>
          <dc:contributor>Barrett, The Honors College</dc:contributor>
          <dc:contributor>School of Mathematical and Statistical Sciences</dc:contributor>
        <dc:description>Small language models (SLMs) are useful for their accessibility and low computational requirements. Earlier work concluded that the performance of SLMs does not increase dramatically with Chain of Thought (CoT) prompting, but recent advances in the training of SLMs raise the question of whether these models' abilities can be improved with CoT. This work examines the performance of five language models, mathstral, phi3-mini, llama3.1-2b, qwen2.5-2b, and gemma2-2b, on a dataset of problems of increasing difficulty drawn from AMC8 and AMC10 to determine whether CoT prompting improves mathematical ability. It also examines whether parameter size or the number of exemplars provided affects performance. The results show that CoT prompting improved the accuracy of all models on the datasets, that 1-shot prompting was more effective than 3-shot prompting, and that the parameter sizes tested did not impact mathematical ability.</dc:description>
                  <dc:subject>Artificial Intelligence</dc:subject>
          <dc:subject>Language Model Evaluation</dc:subject>
          <dc:subject>Prompting</dc:subject>
                  <dc:title>Investigating Few Shot Chain-of-Thought Prompting on the Mathematical Ability of Small Language Models</dc:title></oai_dc:dc></metadata></record></GetRecord></OAI-PMH>
