The MIR Special Issue on Security and Ethics of Generative Artificial Intelligence is now open for original submissions. The submission deadline is June 30, 2025. Contributions are welcome!
Call for Papers
Special Issue on Security and Ethics of Generative Artificial Intelligence
About the Special Issue
Generative Artificial Intelligence (Generative AI), powered by Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs), is rapidly reshaping the landscape of artificial intelligence. State-of-the-art models such as GPT-4, DeepSeek, Claude, and DALL-E 3 demonstrate significant progress in generative capabilities, enabling breakthroughs in creative content synthesis, logical inference, automated decision-making, and domain-specific applications. However, the accelerated deployment of these systems has also exposed critical security vulnerabilities and ethical concerns, including risks of misuse, deepfakes, phishing scams, data privacy breaches, and model security threats, raising serious concerns among individuals, organizations, communities, and even nations. As Generative AI continues to evolve and integrate into various applications and sectors, the need for robust mechanisms to ensure the safety, trustworthiness, and ethical use of generative models has become increasingly urgent. This Special Issue is dedicated to exploring the latest technical advancements that enhance the security, reliability, and ethical deployment of Generative AI technologies.
Scope
MIR is pleased to announce this Special Issue, which focuses on all aspects of security and ethical considerations in generative artificial intelligence systems, with a special emphasis on LLMs and MLLMs. We invite original research exploring challenges related to the safety, privacy, and ethical implications of models, data, content, learning, and evaluation in generative models. We also welcome survey, dataset, and benchmark papers in these areas. Specifically, we encourage scholars from all disciplines to submit contributions on security and ethics topics including, but not limited to:
1) Reliability, Trustworthiness, and Security of Generative AI;
2) Jailbreak Attacks and Defenses;
3) Adversarial Robustness;
4) Backdoor Learning;
5) Machine Unlearning;
6) Hallucination Correction;
7) Detection of AI-Generated Content;
8) Detection of Misinformation and Deepfakes;
9) Watermarking AI-Generated Content and Fingerprinting Models;
10) Federated Learning and Privacy;
11) Bias and Fairness;
12) Explainability and Transparency;
13) New Evaluation Datasets and Performance Benchmarks.
Submission Guidelines
1) Submission deadline: June 30, 2025
2) Submission site (now open):
https://mc03.manuscriptcentral.com/mir
When submitting, please select in the system:
“Step 6 Details & Comments: Special Issue and Special Section---Special Issue on Security and Ethics of Generative Artificial Intelligence”.
3) Submission and peer review guidelines:
Full-length manuscripts and peer review will follow the MIR guidelines. For details, see: https://www.springer.com/journal/11633
Please address inquiries to mir@ia.ac.cn.
Guest Editors
Prof. Jing Dong, Institute of Automation, Chinese Academy of Sciences, China.
E-mail: jdong@nlpr.ia.ac.cn
Assoc. Prof. Matteo Ferrara, University of Bologna, Italy.
E-mail: matteo.ferrara@unibo.it
Prof. Ran He, University of Chinese Academy of Sciences, China.
E-mail: rhe@nlpr.ia.ac.cn
Prof. Dacheng Tao, Nanyang Technological University, Singapore.
E-mail: dacheng.tao@gmail.com
Prof. Philip H. S. Torr, University of Oxford, UK.
E-mail: philip.torr@eng.ox.ac.uk
Prof. Rama Chellappa, Johns Hopkins University, US.
E-mail: rchella4@jhu.edu