Coming soon
Things will be up and running here shortly, but you can subscribe in the meantime if you'd like to stay up to date and receive emails when new content is published!
link: https://arxiv.org/abs/2503.16022
Authors: Mario Sanz-Guerrero¹ and Katharina von der Wense¹,² (¹Johannes Gutenberg University Mainz, Germany; ²University of Colorado Boulder, USA)
This paper introduces and evaluates "Corrective In-Context Learning" (CICL), an approach intended to improve in-context learning in large language models by incorporating self-correction…
link: https://arxiv.org/abs/2407.13690
Authors: Yuxuan Tong (Tsinghua University), Xiwen Zhang (Helixon Research), Rui Wang (Helixon Research), Ruidong Wu (Helixon Research), Junxian He (HKUST)
Introduction: Mathematical reasoning remains one of the most challenging domains for large language models (LLMs). Despite recent advances, even state-of-the-art models struggle with…
link: https://arxiv.org/abs/2503.16212
Authors: Qizhi Pei, Lijun Wu, Zhuoshi Pan, Yu Li, Honglin Lin, Chenlin Ming, Xin Gao, Conghui He, Rui Yan
Introduction: Mathematical reasoning remains a critical benchmark for assessing the cognitive capabilities of Large Language Models (LLMs). While significant progress has been made…
link: https://arxiv.org/pdf/2502.02533v1
Authors: Han Zhou¹ ², Xingchen Wan¹, Ruoxi Sun¹, Hamid Palangi¹, Shariq Iqbal¹, Ivan Vulić¹ ², Anna Korhonen², and Sercan Ö. Arık¹ (¹Google, ²University of Cambridge)