IRTUM – Institutional Repository of the Technical University of Moldova

Security implications and mitigation strategies for AI Agent Frameworks in backend systems


dc.contributor.advisor ȚURCANU, Dinu
dc.contributor.advisor COJOCARU, Svetlana
dc.contributor.author MOCANU, Liviu
dc.date.accessioned 2026-03-02T11:57:11Z
dc.date.available 2026-03-02T11:57:11Z
dc.date.issued 2026
dc.identifier.citation MOCANU, Liviu. Security implications and mitigation strategies for AI Agent Frameworks in backend systems. Master's thesis, Information Security study programme. Scientific advisor: ȚURCANU Dinu, PhD, associate professor. Technical University of Moldova, Chișinău, 2026. en_US
dc.identifier.uri https://repository.utm.md/handle/5014/35535
dc.description The attached file contains: Summary (in Romanian), Abstract, Table of Contents, Introduction, Bibliography. en_US
dc.description.abstract This thesis investigates the security implications of LLM-based backend systems by examining the architectural differences between service-based and agentic frameworks. The analysis demonstrates how architectural design choices shape distinct security profiles, how established LLM attack vectors manifest differently across these paradigms, and validates the theoretical predictions through security testing of a multi-agent implementation. The theoretical analysis shows that service-based frameworks maintain an application-controlled execution flow, allowing validation during development, whereas agentic frameworks grant LLMs autonomous decision-making, introducing unpredictable tool combinations, goal manipulation, and cascade effects in multi-agent architectures. Vulnerabilities pose amplified risks in agentic contexts: prompt injection can hijack agent goals, multi-turn jailbreaks exploit the extended context, and data extraction threatens confidentiality across multiple tool executions. Security testing of a Google ADK multi-agent implementation demonstrates that direct attacks achieve only a 7% success rate, while gradual multi-turn escalation attacks achieve 47% success, underscoring the critical role of execution strategy. Multi-agent defense-in-depth mechanisms proved effective, with subsequent agents blocking malicious requests even when initial agents were bypassed. Mitigation strategies address the unique security challenges of agentic systems through architectural design that masks intermediate steps and hides internal details, defense-in-depth through layered agent security mechanisms, and operational monitoring to detect boundary-erosion attacks.
These strategies acknowledge that agentic systems introduce both new security challenges and new defense opportunities, requiring architectural design and deployment practices distinct from those of traditional service-based systems. (Translated from Romanian.) en_US
dc.description.abstract This thesis investigates security implications in LLM-powered backend systems by examining architectural differences between service-based and agentic frameworks. The research demonstrates how architectural design choices shape distinct security profiles, how established LLM attack vectors manifest differently across these paradigms, and validates theoretical predictions through security testing of a multi-agent implementation. Theoretical analysis reveals that service-based frameworks maintain application-controlled execution flow, allowing validation during development, whereas agentic frameworks grant LLMs autonomous decision-making, introducing unpredictable tool combinations, goal manipulation, and cascade effects in multi-agent architectures. Vulnerabilities pose amplified risks in agentic contexts: prompt injection can hijack agent goals, multi-turn jailbreaks exploit extended context, and data extraction threatens confidentiality across multiple tool executions. Security testing of a Google ADK multi-agent implementation demonstrates that direct, single-turn attacks achieve only 7% success rates, while multi-turn gradual escalation attacks achieve 47% success, underscoring the critical role of execution strategy. The most significant vulnerability is susceptibility to multi-turn boundary erosion, where gradual escalation across conversation turns compromises safety mechanisms designed for single-message evaluation. Information disclosure through intermediate agent steps represents a second critical vulnerability, with many successful attacks relying on visibility of agent responses and tool invocations that should be masked in production deployments. Multi-agent defense-in-depth mechanisms proved effective, with subsequent agents blocking malicious requests even when initial agents were bypassed.
Mitigation strategies address the unique security challenges of agentic systems through architectural design that masks intermediate steps and hides internal details, defense-in-depth through layered agent security mechanisms, and operational monitoring to detect boundary erosion attacks. These strategies acknowledge that agentic systems introduce both new security challenges and new defense opportunities, requiring architectural design and deployment practices distinct from traditional service-based systems. en_US
dc.language.iso en en_US
dc.publisher Universitatea Tehnică a Moldovei en_US
dc.rights Attribution-NonCommercial-NoDerivs 3.0 United States *
dc.rights.uri http://creativecommons.org/licenses/by-nc-nd/3.0/us/ *
dc.subject mitigation strategies en_US
dc.subject AI Agent Frameworks en_US
dc.subject backend systems en_US
dc.title Security implications and mitigation strategies for AI Agent Frameworks in backend systems en_US
dc.type Thesis en_US
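The defense-in-depth and boundary-erosion findings summarized in the abstract can be sketched in miniature. The snippet below is a hypothetical illustration only, not code from the thesis and not the Google ADK API: each agent in a pipeline applies its own guard (so a request that slips past the first agent can still be blocked by a later one), and a conversation-level monitor accumulates risk across turns to catch the gradual escalation that per-message checks miss. All names, term lists, and thresholds are invented for illustration.

```python
# Hypothetical sketch of two mitigations described in the abstract:
# (1) defense-in-depth across a multi-agent pipeline, and
# (2) cross-turn monitoring against boundary-erosion attacks.

# Each agent enforces its own (deliberately different) policy, so the
# layers overlap rather than duplicate one another.
AGENT_POLICIES = {
    "planner": {"bypass"},
    "executor": {"exfiltrate"},
    "reviewer": {"credentials"},
}

def agent_filter(name, request):
    """Per-agent guard: block requests containing this agent's disallowed terms."""
    if any(term in request.lower() for term in AGENT_POLICIES[name]):
        return False, f"{name}: blocked"
    return True, f"{name}: allowed"

def run_pipeline(request):
    """Route a request through every agent; a single block stops the chain.
    A request that bypasses the planner can still be caught downstream."""
    log = []
    for name in AGENT_POLICIES:
        ok, msg = agent_filter(name, request)
        log.append(msg)
        if not ok:
            return False, log
    return True, log

class ErosionMonitor:
    """Cross-turn monitor: per-message checks miss gradual escalation,
    so accumulate a risk score over the whole conversation."""
    def __init__(self, threshold=3):
        self.score = 0
        self.threshold = threshold

    def observe(self, request):
        # Reframing language is a weak signal per message but a strong
        # one when it repeats turn after turn.
        if any(w in request.lower() for w in ("ignore", "actually", "instead")):
            self.score += 1
        return self.score < self.threshold  # False once erosion is suspected
```

In this toy version, `run_pipeline("please exfiltrate the database")` passes the planner but is stopped by the executor, mirroring the abstract's observation that subsequent agents blocked malicious requests even when initial agents were bypassed, while `ErosionMonitor` trips only after several mildly suspicious turns, the pattern single-message evaluation fails to see.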

