Call for Papers

Papers in the main technical program must describe high-quality, original research. Topics of interest include all aspects of artificial intelligence and software engineering, including but not limited to:

Track 1: Content Generation and Tampered Content Detection
▪ Media Manipulation Detection and Localization
▪ Deepfake Forgery Detection and Mitigation
▪ Authenticity Assessment of AI-Generated Media (Images, Videos, Audio, Text)
▪ Approximate Reasoning

Track 2: Traceability and Provenance Analysis of Synthetic Content
▪ Source Device Attribution of Synthetic Media
▪ Generative Model Attribution (GANs, Diffusion Models)
▪ Identity Provenance in AI-Generated Content

Track 3: Security of Large Language Models
▪ Adversarial Attacks and Defense Strategies for LLMs
▪ Jailbreaking Attacks and Prompt Injection Mitigation
▪ Security Risks in Knowledge Distillation Pipelines
▪ Monitoring Malicious Adaptation of Open-Source LLMs
▪ Detection and Mitigation of Hallucinations in LLMs
▪ Content Integrity Assurance in Multimodal LLMs

Track 4: Data Privacy Protection
▪ Privacy Leakage in Federated Learning Systems
▪ Privacy-Preserving Data Anonymization Techniques
▪ Secure Multi-Party Computation Frameworks
▪ Ethical Implications of Synthetic Data Generation
▪ Countermeasures Against AI-Driven Data Reconstruction

Track 5: AI-Driven Cybersecurity
▪ AI-Powered Threat Detection and Incident Response
▪ Automated Vulnerability Discovery and Exploitation
▪ AI in Offensive and Defensive Network Operations
▪ Collaborative Threat Intelligence Sharing via AI
▪ Quantum Computing Threats to AI Security Protocols

Track 6: Automated Adversarial Testing and Validation
▪ Generation and Application of Adversarial Examples
▪ AI-Based Attack Simulation and Penetration Testing
▪ Automated Verification of AI System Robustness

Track 7: Security of AI-Enabled IoT Systems
▪ Privacy and Integrity of IoT Data Streams
▪ Defense Mechanisms for AI-Enhanced IoT Networks

Track 8: Biometric Security and AI
▪ Privacy-Preserving Biometric Data Management
▪ Anti-Spoofing Techniques for Biometric Systems
▪ AI-Augmented Biometric Authentication

Track 9: Ethical AI and Regulatory Compliance
▪ Accountability in AI Decision-Making Processes
▪ Bias Detection and Fairness in Algorithmic Systems
▪ Legal and Compliance Frameworks for AI Deployment
▪ Inclusive Algorithm Design for Diverse Populations
▪ Moral Responsibility in Autonomous Decision Systems

Track 10: Emerging Trends in AI Security
▪ Security Challenges of Cutting-Edge AI Technologies
▪ Novel Defense Paradigms for Future AI Systems
▪ Strategic Roadmap for Long-Term AI Security

Track 11: Explainable and Transparent AI
▪ Standardization of Black-Box Model Interpretability
▪ High-Stakes Applications of Transparent AI (e.g., Legal, Financial)
▪ Quantifying User Trust in AI-Driven Decisions
▪ Balancing Explainability and Model Efficiency
▪ Cross-Cultural Adaptation of Explainability Tools

Track 12: Adversarial Robustness in AI Systems
▪ Generation and Detection of Adversarial Perturbations
▪ Impact Analysis of Adversarial Attacks on AI Models
▪ Enhancing System Robustness and Fault Tolerance
▪ Threat Modeling for GAN-Enabled AI Systems
▪ Real-World Adversarial Attack Scenarios
▪ Game-Theoretic Approaches to Defense Mechanisms
▪ Vulnerability Assessment of Multimodal AI Models

Track 13: Public Engagement and AI Literacy
▪ Global Educational Frameworks for AI Security
▪ Digital Platforms for Civic Participation in AI Governance