ASERC 1ST INTERNATIONAL CONFERENCE ON HEALTH, ENGINEERING, ARCHITECTURE AND MATHEMATICS, İstanbul, Türkiye, 6-8 June 2025, pp. 157-165, (Full Text Paper)
Large Language Models (LLMs) have become an indispensable part of modern technology thanks to their groundbreaking capabilities in natural language processing. These models have a wide range of applications, from virtual assistants to code development, but they also pose serious security and privacy threats. This study examines in detail the major security threats to LLMs: backdoor attacks, model inversion attacks, membership inference attacks, and data poisoning. In domains that process sensitive data, such as healthcare, finance, and law, risks such as the leakage of personal information and the generation of malicious content are of particular concern. The study addresses protection strategies against these threats, including data encryption, access control, adversarial training, and differential privacy. It also emphasizes that international legal frameworks such as the GDPR, HIPAA, the CCPA, and the EU Artificial Intelligence Act (AI Act) provide a critical foundation for the responsible and ethical development and use of LLMs. The study concludes that securing LLMs requires not only technical measures but also legal compliance, ethical principles, and interdisciplinary collaboration.
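As a minimal illustration of one defense named in the abstract, the following Python sketch applies the Laplace mechanism, the textbook construction for epsilon-differential privacy when releasing a numeric statistic. The statistic, sensitivity, and privacy budget below are hypothetical values chosen for illustration and are not parameters taken from the paper.

    import numpy as np

    def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
        # Add Laplace noise with scale = sensitivity / epsilon; this is the
        # standard construction satisfying epsilon-differential privacy.
        scale = sensitivity / epsilon
        return true_value + np.random.laplace(loc=0.0, scale=scale)

    # Hypothetical example: privately release a count of records matching a query.
    true_count = 42      # sensitive statistic (illustrative value)
    sensitivity = 1.0    # one individual changes a count by at most 1
    epsilon = 0.5        # privacy budget; smaller values give stronger privacy

    private_count = laplace_mechanism(true_count, sensitivity, epsilon)
    print(f"True count: {true_count}, private release: {private_count:.1f}")

In practice, the same idea underlies differentially private training of LLMs (e.g., noising gradients rather than query answers), with the privacy budget epsilon governing the trade-off between privacy and model utility.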