In recent years, the interplay between artificial intelligence (AI) and security has become more prominent and important, driven by the need to improve security more efficiently. One specific domain of security that steadily receives more AI applications is cryptography: AI techniques already improve implementation attacks, attacks on PUFs, hardware Trojan detection, and more. Besides AI's role in cryptography, we believe cryptography for AI to be an emerging and important topic. With an increasing number of attacks on AI systems, one possible research direction is to investigate which cryptographic techniques can mitigate such threats. We aim to gather researchers from academia and industry who work on various aspects of cryptography and AI to share their experience and discuss how to strengthen the collaboration. We are especially interested in exploring the transferability of techniques among various cryptographic applications and AI protection mechanisms. Finally, we will discuss the developments of the last year, i.e., since the previous AICrypt event.
We encourage researchers working on all aspects of AI and cryptography to take this opportunity to share their work at AICrypt and participate in the discussions.
Authors are invited to submit extended abstracts via the EasyChair submission system.
Every accepted submission must have at least one author registered for the workshop. All submitted abstracts must follow the original LNCS format, with a page limit of 2 pages. Abstracts should be submitted electronically in PDF format.
EXTENDED submission deadline!
Workshop paper submission deadline: Apr 20, 2022
previously Apr 7, 2022
Workshop paper notification: Apr 25, 2022
previously Apr 20, 2022
Workshop date: May 30, 2022
Workshop registration goes through the Eurocrypt registration process.
Cyber security and machine learning have typically been separate disciplines, and the same holds for cryptography and machine learning. New cross-discipline research is needed to advance all three domains.
In this talk, we discuss how machine learning can serve as an enabler for advanced cryptography, privacy-preserving protocols, and cryptanalysis. We also discuss how efficient machine learning models can enable local inference and advanced vulnerability management on edge devices. The talk also covers how neural network algorithms and cryptographic cores will co-exist in future Neural Processing Units.
We cover the role of cryptography in securing machine learning models by (1) ensuring the confidentiality of both data and model during training and classification; (2) protecting models from being tampered with or having bias introduced for profit or control; (3) protecting against model poisoning; and (4) introducing cryptographic randomness into the training of deep neural networks. This could help drive the adoption of AI in privacy-sensitive industries, including medicine and finance.
Dr Najwa Aaraj is the Chief Researcher at the Cryptography Research Centre at the Technology Innovation Institute (TII). Dr Aaraj leads the research and development of cryptographic technologies, including post-quantum cryptography software libraries and hardware implementations, lightweight cryptographic libraries, cryptanalysis, and applied machine learning for cryptographic technologies. She is also Acting Chief Researcher at TII’s Autonomous Robotics Research Centre.
Dr Aaraj earned a PhD with Highest Distinction focused on Applied Cryptography and Embedded Systems Security from Princeton University (USA). She has extensive expertise in applied cryptography, trusted platforms, security architecture for embedded systems, software exploit detection and prevention systems, and biometrics. She has over 15 years of experience with global firms, working in multiple geographies from Australia to the United States.
Before TII, Dr Aaraj was Senior Vice President at DarkMatter, a cyber-security leader based in the UAE. She was formerly at Booz & Company, where she worked with clients globally. She also held Research Fellow positions with the Embedded Systems Security Group at IBM T.J. Watson Security Research in NY, with the Intel Security Research Group in Portland, Oregon, where she worked on Trusted Platform Modules, and with NEC Laboratories in Princeton, NJ. Dr Aaraj has authored 5 USPTO patents and more than 30 IEEE and ACM journal and conference articles, and co-authored a Springer book on post-quantum digital signatures. She has delivered multiple keynotes and conference talks internationally and contributed to various technical articles and reports. Dr Aaraj is the Chairman of the UAE AI Expert Group at the UAE Council for AI and Blockchain. She is also an Adviser within the Strategic Advisory Group at the Washington DC-based Paladin Capital Group (Cyber Venture Capital) and Adjunct Professor at the Mohamed Bin Zayed University of Artificial Intelligence (Machine Learning Research Group). In addition, she is an Adviser to multiple security and Machine Learning start-ups, including New York-based Neutigers, and to the Okinawa Institute of Science and Technology Graduate University. She is the recipient of the Wu Prize for research excellence from Princeton University and received a Special Recognition award at the Arab Woman Awards 2021, held in partnership with the United Nations to recognise the notable achievements of the region’s women.
Given the threat of side-channel analysis, leakage assessment of a chip is very important for the semiconductor industry and has received a lot of attention in the past years. A significant number of studies show how to recover secrets by monitoring the algorithm's execution through side channels. Early attempts to create automated tooling, and the recently increased efforts toward this purpose, prove the appeal of leakage simulators. A leakage simulator translates a sequence of assembly instructions into a power trace. The challenge for wide-scale adoption lies in the manual effort required to create a leakage simulator. ABBY is the first post-silicon leakage simulator; it uses deep learning to automate the profiling of the target.
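To make the idea of translating instructions into a power trace concrete, the following toy sketch (illustrative only, not the ABBY tool) assumes a simple Hamming-weight leakage model: each instruction leaks the Hamming weight of the value it writes, plus Gaussian noise.

```python
# Toy instruction-level leakage simulator (illustrative sketch, not ABBY).
# Assumption: a first-order Hamming-weight leakage model with additive
# Gaussian noise, a common simplification in the side-channel literature.
import random

def hamming_weight(value: int) -> int:
    """Number of set bits in a 32-bit word."""
    return bin(value & 0xFFFFFFFF).count("1")

def simulate_trace(written_values, noise_sigma=0.5, seed=0):
    """Translate a sequence of written register values into a simulated power trace."""
    rng = random.Random(seed)
    return [hamming_weight(v) + rng.gauss(0.0, noise_sigma) for v in written_values]

# Example: leakage of a short, hypothetical instruction sequence.
trace = simulate_trace([0x00, 0xFF, 0xDEADBEEF])
```

A real simulator must also model effects such as pipeline state and transitions between consecutive values, which is exactly the device-specific profiling effort that deep learning aims to automate.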
In contrast to leakage simulators, side-channel disassemblers aim to extract assembly instructions from side-channel information. Applications of side-channel disassemblers include malware detection and firmware reverse engineering. Their appeal is stealth, as monitoring a side channel does not interfere with the execution of instructions.
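The core of such a disassembler can be sketched as template matching: profile the average leakage of each instruction, then classify an observed sample by its nearest template. The instruction names and numbers below are hypothetical placeholders.

```python
# Toy side-channel "disassembler" (illustrative sketch, not a real tool):
# nearest-mean template matching on one leakage sample per instruction.

def build_templates(profiling_data):
    """profiling_data: dict mapping instruction name -> list of leakage samples
    recorded while that instruction executed. Returns per-instruction means."""
    return {instr: sum(samples) / len(samples)
            for instr, samples in profiling_data.items()}

def classify(sample, templates):
    """Return the instruction whose template mean is closest to the sample."""
    return min(templates, key=lambda instr: abs(templates[instr] - sample))

# Hypothetical profiling data for two instructions.
templates = build_templates({"mov": [3.1, 2.9], "add": [5.0, 5.2]})
classify(4.9, templates)  # -> "add"
```

Practical disassemblers work on multi-sample traces and many instruction classes, which is where deep learning classifiers replace this simple nearest-mean rule.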
This talk will discuss lessons learned when applying deep learning to building leakage simulators and side-channel disassemblers, using the ARM Cortex-M0 architecture as a case study.
Dr. Ileana Buhan is an assistant professor of cryptographic engineering in the Digital Security group. Her research focuses on developing tools that help designers of cryptographic algorithms build secure implementations. She spent over 10 years in the security evaluation industry at Riscure. She serves on several program committees of conferences that specialize in hardware security (TCHES, COSADE, CARDIS, FDTC, DATE, SPACE). She was also appointed general chair of CHES 2018 and program co-chair of CARDIS 2022.
EdgeML combines the power of machine (deep) learning and edge (IoT) devices. Owing to its capability to solve difficult problems on sensor nodes and other resource-constrained devices, EdgeML has seen adoption in a variety of application domains, such as smart manufacturing, remote monitoring, and smart homes. However, being deployed on edge devices exposes machine/deep learning algorithms to a range of new attacks, especially physical attacks.
In this talk, we demonstrate practical physical attacks on EdgeML. First, we show how side-channel attacks can be used to reverse engineer the architectures and parameters of deep learning models. These models are often proprietary, carry commercial value, and contain information about sensitive training data. The feasibility of these attacks is shown both on standalone microcontrollers and on commercial ML accelerators. Further, we demonstrate practical, low-cost cold-boot model recovery attacks on the Intel Neural Compute Stick 2 (NCS2), recovering the model architecture and weights loaded from a Raspberry Pi with high accuracy. The proposed attack remains unaffected by the model encryption features of the NCS2 framework.
Dr. Shivam Bhasin is a Senior Research Scientist and Programme Manager (Cryptographic Engineering) at the Centre for Hardware Assurance, Temasek Laboratories, Nanyang Technological University, Singapore. He received his PhD in Electronics & Communication from Telecom Paristech in 2011 and an Advanced Master in Security of Integrated Systems & Applications from Mines Saint-Etienne, France, in 2008. Before NTU, Shivam held the position of Research Engineer at Institut Mines-Telecom, France. He was also a visiting researcher at UCL, Belgium (2011) and Kobe University (2013). His research interests include embedded security, trusted computing, and secure designs. He has co-authored several publications in recognized journals and conferences. Some of his research now also forms part of the ISO/IEC 17825 standard.
The evolution of our digital society relies on networks that can handle an increasing amount of data, exchanged by an increasing number of connected devices at an increasing communication speed. With the growth of the online world, criminal activities also extend onto the Internet. Network Intrusion Detection Systems (NIDSs) detect malicious activities by analyzing network data. While neural network-based solutions can effectively detect various attacks in an offline setting, it is not straightforward to deploy them in high-bandwidth online systems. This talk elucidates why Field-Programmable Gate Arrays (FPGAs) are the preferred platforms for online network intrusion detection, and which challenges need to be overcome to develop FPGA-based NIDSs for Terabit Ethernet networks.
Nele Mentens is a professor at Leiden University and KU Leuven. Her research interests are in the field of configurable computing and hardware security. She was/is the PI in around 25 finished and ongoing research projects with national and international funding. She serves/served as a program committee member of renowned international conferences on security and hardware design. She was the general co-chair of FPL'17 and was/is the program chair of FPL'20, CARDIS'20, RAW'21, VLSID'22 and DDECS'23. She is (co-)author of around 150 publications in international journals, conferences and books. She received best paper awards and nominations at CHES'19, Asian HOST'17 and DATE'16. Nele serves as an associate editor for IEEE TIFS, IEEE CAS Magazine, IEEE S&P, and IEEE TCAD.
Ahmad-Reza Sadeghi is a professor of Computer Science at TU Darmstadt, Germany. He is the head of the Systems Security Lab at the Cybersecurity Research Center of TU Darmstadt. Since 2012, he has been leading three Intel Collaborative Research Centers: on Secure Mobile and Embedded Computing, on Trustworthy Autonomous Systems, and, since 2020, on Private AI. Prof. Sadeghi holds a PhD in Computer Science and MSc degrees in Electrical Engineering and Industrial Engineering. Prior to academia, he worked in R&D at telecommunications enterprises, among others Ericsson.
He has continuously contributed to the security and privacy as well as the systems research communities. He was Editor-in-Chief of IEEE Security and Privacy Magazine and served on the editorial boards of ACM Transactions on Information and System Security (TISSEC), ACM Books, ACM TODAES, ACM TIOT, and ACM DTRAP. For his influential research on Trusted and Trustworthy Computing, he received the renowned German “Karl Heinz Beckurts” award, which honors excellent scientific achievements with high impact on industrial innovations in Germany.
In 2018, Prof. Sadeghi received the ACM SIGSAC Outstanding Contributions Award for dedicated research, education, and management leadership in the security community and for pioneering contributions in content protection, mobile security and hardware-assisted security. SIGSAC is ACM’s Special Interest Group on Security, Audit and Control.
Delft University of Technology, The Netherlands
School of Computer Science & IT, University College Cork¹, University of Amsterdam, The Netherlands², Università della Svizzera italiana, Switzerland³