Securing Generative AI Models: Best Paper Award at ICAIC 2025

The IEEE Chicago Section is proud to spotlight an outstanding achievement by our very own Conference Chair, Advait Patel. At the 4th IEEE International Conference on AI in Cybersecurity (ICAIC 2025), held February 5–7, 2025, at the University of Houston in Houston, Texas, USA, the pioneering research paper "Securing Cloud AI Workloads: Protecting Generative AI Models from Adversarial Attacks" received the Best Paper Award, a testament to its significant contributions to the field.

Authored by Advait Patel (Broadcom) and his co-authors, the paper addresses one of the most pressing concerns in the AI and cybersecurity landscape: safeguarding generative AI models deployed in cloud environments.

ICAIC Best Paper Award certificate

Addressing AI Vulnerabilities in the Cloud

Generative AI models, known for their ability to create realistic images, text, and media, are widely utilized across industries such as healthcare, finance, and autonomous systems. However, their deployment in cloud environments exposes them to unique security threats, particularly adversarial attacks, where malicious actors subtly modify inputs to manipulate model outputs. Such vulnerabilities can lead to severe real-world consequences, from misdiagnosed medical conditions to security risks in autonomous vehicles.

The award-winning research delves into three primary adversarial threats:

  • Evasion Attacks – Altering input data to deceive AI models into incorrect classifications.
  • Poisoning Attacks – Manipulating training datasets to compromise model integrity.
  • Inference Attacks – Extracting sensitive information from AI systems.

These attacks pose substantial risks in cloud-based deployments, where shared resources, multi-tenancy, and open APIs create additional security concerns.
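To make the first of these threats concrete, here is a minimal sketch of an evasion attack using the Fast Gradient Sign Method (FGSM) against a toy logistic-regression classifier. The model, weights, and epsilon value are illustrative assumptions for this note, not details taken from the award-winning paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """FGSM evasion: nudge each feature of x in the sign of the
    loss gradient, bounded by eps, to push the model toward error."""
    p = predict(w, b, x)
    grad_x = (p - y) * w          # cross-entropy gradient w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy linear model and a clean input with true label 1
w, b = np.array([2.0, -1.5]), 0.0
x = np.array([0.4, -0.3])

p_clean = predict(w, b, x)                      # confident class-1 prediction
x_adv = fgsm_perturb(w, b, x, y=1.0, eps=0.5)   # small, targeted perturbation
p_adv = predict(w, b, x_adv)                    # prediction flips to class 0
print(p_clean, p_adv)
```

The perturbation is small per feature, yet it is aligned with the model's loss gradient, which is what lets such attacks flip predictions without obviously corrupting the input.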

Innovative Defense Mechanisms

To combat these threats, the paper introduces and evaluates multiple defense strategies, including:

  • Adversarial Training – Exposing AI models to adversarial examples during training to enhance robustness.
  • Defensive Distillation – Smoothing decision boundaries to mitigate attack impact.
  • Cloud-Specific Security Measures – Implementing encrypted communications, robust authentication, and continuous anomaly monitoring to safeguard AI workloads in multi-tenant environments.
  • AI Explainability and Transparency – Utilizing Explainable AI (XAI) to enhance threat detection and build trust in AI-generated outputs.
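The first of these defenses, adversarial training, can be sketched in a few lines: at each training step the model also fits FGSM-perturbed copies of the batch. The toy data, logistic-regression model, and hyperparameters below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, b, X, y, eps):
    """Craft FGSM-perturbed copies of a batch of inputs."""
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w   # cross-entropy gradient w.r.t. inputs
    return X + eps * np.sign(grad_x)

# Toy two-cluster data; the label is the sign of the feature sum
X = rng.normal(size=(200, 2)) + np.where(rng.random(200) < 0.5, 1.5, -1.5)[:, None]
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.2
for _ in range(300):
    X_adv = fgsm(w, b, X, y, eps)       # adversarial copies of the batch
    X_all = np.vstack([X, X_adv])       # train on clean + adversarial inputs
    y_all = np.concatenate([y, y])
    p = sigmoid(X_all @ w + b)
    w -= lr * (X_all.T @ (p - y_all)) / len(y_all)
    b -= lr * np.mean(p - y_all)

# Robust accuracy: evaluate on freshly perturbed inputs
acc_adv = np.mean((sigmoid(fgsm(w, b, X, y, eps) @ w + b) > 0.5) == y)
print(acc_adv)
```

Because the model repeatedly sees worst-case (within an epsilon budget) versions of its own training data, it learns decision boundaries that leave a margin against those perturbations, which is the intuition behind the robustness gains the paper evaluates.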

Implications for the Future of AI Security

The paper not only provides a comprehensive security framework for cloud-hosted generative AI models but also lays the groundwork for future advancements in AI security. The authors discuss quantum-resistant encryption, decentralized AI security, and federated learning as emerging research directions to fortify AI applications against evolving threats.

This recognition at ICAIC 2025 underscores the importance of interdisciplinary collaboration in advancing AI security and ensuring the safe deployment of AI-driven technologies in real-world applications.

IEEE Chicago Section’s Commitment to AI and Cybersecurity

As a hub for cutting-edge research and technological innovation, the IEEE Chicago Section takes immense pride in the accomplishments of our Conference Chair, Advait Patel. His leadership and dedication to the field of AI security continue to inspire the IEEE community and reinforce IEEE’s mission to foster technological advancements for the benefit of society.

We extend our congratulations to Advait Patel and his co-authors for their remarkable achievement and look forward to further groundbreaking developments in AI security and cloud computing.

For more details on ICAIC 2025 and award-winning research, visit the IEEE Chicago Section website.

Link to the Research Paper – https://ieeexplore.ieee.org/document/10848877