IRTUM – Institutional Repository of the Technical University of Moldova

A PEFT-optimized language model for curriculum alignment and educational coherence

dc.contributor.author KAPUSTEANSKI, Maxim
dc.contributor.author DRUMEA, Nicu
dc.date.accessioned 2025-07-08T17:30:52Z
dc.date.available 2025-07-08T17:30:52Z
dc.date.issued 2025
dc.identifier.citation KAPUSTEANSKI, Maxim and Nicu DRUMEA. A PEFT-optimized language model for curriculum alignment and educational coherence. In: The 20th International Conference of Constructive Design and Technological Optimization in Machine Building Field: Conference Proceedings Abstracts, OPROTEH 2025, Bacau, Romania, 21-23 May 2025. Bacau: "Alma Mater", 2025, pp. 188-189. ISSN 2457-3388. en_US
dc.identifier.issn 2457-3388
dc.identifier.uri https://repository.utm.md/handle/5014/32664
dc.description.abstract This paper presents the development and fine-tuning of a lightweight natural language processing (NLP) model, optimized through Parameter-Efficient Fine-Tuning (PEFT) strategies, aimed at enhancing curriculum alignment and educational coherence. Using the LoRA (Low-Rank Adaptation) method, the transformer-based model microsoft/phi-2 was adapted to analyze the consistency between curricular objectives, lecture content, laboratory guides, and final examination items. The project addresses the critical challenge of assessing content coverage and thematic coherence, a difficulty extensively discussed in educational research. The primary objectives were twofold: (i) to evaluate the degree of curricular coverage and logical alignment between teaching and testing materials, and (ii) to develop an AI-based assistant capable of generating curriculum-aware, pedagogically relevant responses to support learners. Romanian and Russian course materials were automatically translated into English using the Helsinki-NLP Opus-MT models, ensuring multilingual training adaptability. Data preparation involved structuring datasets in JSONL format, advanced tokenization, and adaptation for causal language modeling. To enable efficient deployment on resource-constrained hardware, the final model was quantized to 8-bit precision using the bitsandbytes optimization library. By leveraging PEFT methods such as LoRA, the fine-tuning process achieved substantial reductions in computational resource usage without degrading output quality. Model evaluation, conducted through a multilingual interactive script with real-time translation, confirmed the model's ability to diagnose curriculum gaps, redundancies, and inconsistencies while maintaining high coherence and relevance in response generation. The proposed methodology contributes to educational NLP research by offering a scalable, resource-efficient approach to training specialized AI assistants, directly addressing the need for systematic evaluation frameworks in content coherence analysis. en_US
dc.language.iso en en_US
dc.publisher "Alma Mater" Publishing House, Bacau en_US
dc.rights Attribution-NonCommercial-NoDerivs 3.0 United States
dc.rights.uri http://creativecommons.org/licenses/by-nc-nd/3.0/us/
dc.subject multilingual AI en_US
dc.subject transformer models en_US
dc.subject curriculum alignment en_US
dc.subject educational coherence en_US
dc.title A PEFT-optimized language model for curriculum alignment and educational coherence en_US
dc.type Article en_US
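
The abstract above outlines a four-step pipeline: machine translation of course materials, JSONL data preparation, LoRA fine-tuning, and 8-bit quantization. As a rough illustration of the translation step, the sketch below runs a Helsinki-NLP Opus-MT checkpoint through the Hugging Face transformers MarianMT classes; the Russian-to-English checkpoint shown is a published model, but the batching details are assumptions, since the abstract does not describe the exact setup.

# Sketch of the Opus-MT translation step (details assumed): converting source
# course materials into English before fine-tuning. A Romanian source would use
# the analogous Romanian-to-English Opus-MT checkpoint.
from transformers import MarianMTModel, MarianTokenizer

checkpoint = "Helsinki-NLP/opus-mt-ru-en"
mt_tokenizer = MarianTokenizer.from_pretrained(checkpoint)
mt_model = MarianMTModel.from_pretrained(checkpoint)

def translate(sentences: list[str]) -> list[str]:
    """Translate a batch of source-language sentences into English."""
    batch = mt_tokenizer(sentences, return_tensors="pt",
                         padding=True, truncation=True)
    generated = mt_model.generate(**batch)
    return mt_tokenizer.batch_decode(generated, skip_special_tokens=True)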
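
The JSONL structuring and tokenization for causal language modeling might then look like the following; the "instruction"/"response" field names, the prompt layout, and the file name are hypothetical, as the abstract does not specify the dataset schema.

# Hypothetical JSONL record and its tokenization for causal LM training;
# field names and file name are illustrative, not taken from the paper.
import json
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

record = {
    "instruction": "Check whether this exam item is covered by the lecture objectives.",
    "response": "The item maps to lecture 3 (attention mechanisms), so it is covered.",
}

with open("curriculum.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")  # one object per line

# For causal language modeling, prompt and answer form a single token sequence;
# labels are produced by shifting the input ids inside the training loop/collator.
text = record["instruction"] + "\n" + record["response"] + tokenizer.eos_token
tokens = tokenizer(text, truncation=True, max_length=512)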
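
The LoRA adaptation itself is typically configured with the Hugging Face peft library, roughly as below; the rank, scaling factor, and target modules are illustrative assumptions, not the hyperparameters reported by the authors.

# Minimal LoRA setup for microsoft/phi-2 with peft (hyperparameters assumed).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,               # low-rank dimension (assumed)
    lora_alpha=32,      # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj"],  # attention projections (assumed)
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable

Because gradients flow only through the low-rank adapter matrices while the base weights stay frozen, this setup is what yields the substantial reduction in computational resource usage that the abstract reports.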
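
For the 8-bit deployment step, bitsandbytes is usually driven through the transformers quantization config; this is a minimal sketch assuming that standard integration rather than the authors' exact deployment script.

# Load the model in 8-bit precision via bitsandbytes for resource-constrained
# inference (requires the accelerate package and a CUDA-capable GPU).
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True)

quantized_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",           # or the merged fine-tuned checkpoint
    quantization_config=bnb_config,
    device_map="auto",           # spread layers across available devices
)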

