About AICrypt

In recent years, the interplay between artificial intelligence (AI) and security has become more prominent and important. This is a natural development, driven by the need to improve security more efficiently. One specific domain of security that steadily receives more AI applications is cryptography. We already see how AI techniques can improve implementation attacks, attacks on PUFs, hardware Trojan detection, etc.

Besides AI's role in cryptography, we believe cryptography for AI to be an emerging and important topic. As we see an increasing number of attacks on AI systems, one possible research direction is to investigate which cryptographic techniques can be used to mitigate such threats.

We aim to gather researchers from academia and industry who work on various aspects of cryptography and AI to share their experience and discuss how to strengthen collaboration. We are especially interested in exploring the transferability of techniques among various cryptographic applications and AI protection mechanisms. Finally, we will discuss the developments of the last year, i.e., since the previous AICrypt event.

Submission

We encourage researchers working on all aspects of AI and cryptography to take this opportunity to share their work at AICrypt and participate in the discussions. Authors are invited to submit extended abstracts via the EasyChair submission system.
Every accepted submission must have at least one author registered for the workshop. All submitted abstracts must follow the original LNCS format, with a limit of 2 pages. Abstracts should be submitted electronically in PDF format.

Important dates (AoE)

EXTENDED submission deadline!

Workshop paper submission deadline: Apr 20, 2022

previously Apr 7, 2022

Workshop paper notification: Apr 25, 2022

previously Apr 20, 2022

Workshop date: May 30, 2022


Registration

Registration is open!

Workshop registration goes through the Eurocrypt registration process.

Register here.

Keynotes

Cryptography, Cyber Security and Machine Learning: Interdisciplinary benefits

Najwa Aaraj, Technology Innovation Institute, United Arab Emirates

Cyber security, cryptography, and machine learning have typically been treated as separate disciplines. New cross-discipline research is needed to advance all three domains.

In this talk, we discuss how machine learning can be used as an enabler for advanced cryptography, privacy-preserving protocols, and cryptanalysis. We also discuss how efficient machine learning models can enable local inference and advanced vulnerability management on edge devices. The talk also covers how neural network algorithms and cryptographic cores will co-exist in future Neural Processing Units.

We cover the role of cryptography in securing machine learning models by (1) ensuring the confidentiality of both data and models during training and classification; (2) protecting models from being tampered with or having bias introduced for profit or control; (3) protecting against model poisoning; and (4) introducing cryptographic randomness into the training of deep neural networks. This could help drive the adoption of AI in privacy-sensitive industries, including medicine and finance.

Dr Najwa Aaraj is the Chief Researcher at the Cryptography Research Centre at the Technology Innovation Institute (TII). Dr Aaraj leads the research and development of cryptographic technologies, including post-quantum cryptography software libraries and hardware implementations, lightweight cryptographic libraries, cryptanalysis, and applied machine learning for cryptographic technologies. She is also Acting Chief Researcher at TII’s Autonomous Robotics Research Centre.

Dr Aaraj earned a PhD with Highest Distinction focused on Applied Cryptography and Embedded Systems Security from Princeton University (USA). She has extensive expertise in applied cryptography, trusted platforms, security architecture for embedded systems, software exploit detection and prevention systems, and biometrics. She has over 15 years of experience with global firms, working in multiple geographies from Australia to the United States.

Before TII, Dr Aaraj was Senior Vice President at DarkMatter, a cyber-security leader based in the UAE. She was formerly at Booz & Company, where she worked with clients globally. She also held Research Fellow positions with the Embedded Systems Security Group at IBM T.J. Watson Security Research in NY, with the Intel Security Research Group in Portland, Oregon, where she worked on Trusted Platform Modules, and with NEC Laboratories in Princeton, NJ. Dr Aaraj has authored 5 USPTO patents and more than 30 IEEE and ACM journal and conference articles, and co-authored a Springer book on post-quantum digital signatures. She has delivered multiple keynotes and conference talks internationally and contributed to various technical articles and reports. Dr Aaraj is the Chairman of the UAE AI Expert Group at the UAE Council for AI and Blockchain. She is also an Adviser within the Strategic Advisory Group at the Washington DC-based Paladin Capital Group (Cyber Venture Capital) and an Adjunct Professor at the Mohamed Bin Zayed University of Artificial Intelligence (Machine Learning Research Group). In addition, she is an adviser to multiple security and machine learning start-ups, including New York-based Neutigers, and to the Okinawa Institute of Science and Technology Graduate University. She is the recipient of the Wu Prize for research excellence from Princeton University. She also received a Special Recognition award at the Arab Woman Awards 2021, held in partnership with the United Nations to recognise the notable achievements of the region's women.

Federated Learning for Fun and Profit: Promises, Opportunities and Challenges

Ahmad-Reza Sadeghi, TU Darmstadt, Germany

In this talk, we present our research and experience, including work with industrial partners and government agencies, in utilizing Federated Learning (FL) to enhance the security of large-scale systems and applications, as well as the challenges we faced in building secure and privacy-preserving federated learning systems.

Ahmad-Reza Sadeghi is a professor of Computer Science and the head of the System Security Lab at the Technical University of Darmstadt, Germany. He has been leading several collaborative research labs with Intel since 2012 and with Huawei since 2019. He studied both Mechanical and Electrical Engineering and holds a PhD in Computer Science from Saarland University, Germany. Prior to academia, he worked in R&D at IT enterprises, including Ericsson Telecommunications. He has continuously contributed to the security and privacy research field. He was Editor-in-Chief of IEEE Security and Privacy Magazine and currently serves on the editorial boards of ACM TODAES, ACM TIOT, and ACM DTRAP.
For his influential research on trusted and trustworthy computing, he received the renowned German "Karl Heinz Beckurts" award, which honors excellent scientific achievements with high impact on industrial innovation in Germany. In 2018, he received the ACM SIGSAC Outstanding Contributions Award for dedicated research, education, and management leadership in the security community and for pioneering contributions in content protection, mobile security, and hardware-assisted security. In 2021, he was honored with the Intel Academic Leadership Award at the USENIX Security conference for his influential research on cybersecurity, in particular hardware-assisted security.

Invited Talks

Return trip: From side-channel leakage to assembly code, lessons learned from applying deep learning

Ileana Buhan, Radboud University Nijmegen, The Netherlands

Given the threat of side-channel analysis, leakage assessment of a chip is very important for the semiconductor industry and has received a lot of attention in recent years. A significant number of studies show how to recover secrets by monitoring an algorithm's execution through side channels. Early attempts to create automated tooling, and the recently increased efforts toward this purpose, prove the appeal of leakage simulators. A leakage simulator translates a sequence of assembly instructions into a power trace. The challenge for wide-scale adoption lies in the manual effort required to create a leakage simulator. ABBY is the first post-silicon leakage simulator; it uses deep learning to automate the profiling of the target.

In contrast to leakage simulators, side-channel disassemblers aim to extract assembly instructions from side-channel information. Applications of side-channel disassemblers include malware detection and firmware reverse engineering. Their appeal is stealth, as monitoring a side channel does not interfere with the execution of instructions.

This talk will discuss lessons learned when applying deep learning to building leakage simulators and side-channel disassemblers, using the ARM Cortex-M0 architecture as a case study.

Dr. Ileana Buhan is an assistant professor of cryptographic engineering in the Digital Security group at Radboud University. Her research focuses on developing tools that help designers of cryptographic algorithms create secure implementations. She spent over 10 years in the security evaluation industry at Riscure. She serves on several program committees of conferences specializing in hardware security (TCHES, COSADE, CARDIS, FDTC, DATE, SPACE). She was also appointed general chair of CHES 2018 and program co-chair of CARDIS 2022.

Leaking AI: Threat of Physical Attacks on EdgeML Devices

Shivam Bhasin, Temasek Laboratories, Nanyang Technological University, Singapore

EdgeML combines the power of machine (deep) learning and edge (IoT) devices. Owing to its capability of solving difficult problems on sensor nodes and other resource-constrained devices, EdgeML has seen adoption in a variety of application domains such as smart manufacturing, remote monitoring, and smart homes. However, being deployed on edge devices exposes machine/deep learning algorithms to a range of new attacks, especially physical attacks.

In this talk, we demonstrate practical physical attacks on EdgeML. First, we show how side-channel attacks can be used to reverse engineer the architectures and parameters of deep learning models. These models are often proprietary, carry commercial value, and contain information on sensitive training data. The feasibility of these attacks is demonstrated both on standalone microcontrollers and on commercial ML accelerators. Further, we demonstrate practical and low-cost cold-boot-based model recovery attacks on the Intel Neural Compute Stick 2 (NCS2), recovering the model architecture and weights, loaded from a Raspberry Pi, with high accuracy. The proposed attack remains unaffected by the model encryption features of the NCS2 framework.

Dr. Shivam Bhasin is a Senior Research Scientist and Programme Manager (Cryptographic Engineering) at the Centre for Hardware Assurance, Temasek Laboratories, Nanyang Technological University, Singapore. He received his PhD in Electronics & Communication from Telecom Paristech in 2011 and an Advanced Master in Security of Integrated Systems & Applications from Mines Saint-Etienne, France, in 2008. Before NTU, Shivam held the position of Research Engineer at Institut Mines-Telecom, France. He was also a visiting researcher at UCL, Belgium (2011) and Kobe University (2013). His research interests include embedded security, trusted computing, and secure designs. He has co-authored several publications in recognized journals and conferences. Some of his research now also forms part of the ISO/IEC 17825 standard.

High-throughput network intrusion detection based on deep learning

Nele Mentens, Leiden University, The Netherlands and KU Leuven, Belgium

The evolution of our digital society relies on networks that can handle an increasing amount of data, exchanged by an increasing number of connected devices at an increasing communication speed. With the growth of the online world, criminal activities also extend onto the Internet. Network Intrusion Detection Systems (NIDSs) detect malicious activities by analyzing network data. While neural network-based solutions can effectively detect various attacks in an offline setting, it is not straightforward to deploy them in high-bandwidth online systems. This talk elucidates why Field-Programmable Gate Arrays (FPGAs) are the preferred platforms for online network intrusion detection, and which challenges need to be overcome to develop FPGA-based NIDSs for Terabit Ethernet networks.

Nele Mentens is a professor at Leiden University and KU Leuven. Her research interests are in the fields of configurable computing and hardware security. She was/is the PI of around 25 finished and ongoing research projects with national and international funding. She serves/served as a program committee member of renowned international conferences on security and hardware design. She was the general co-chair of FPL'17 and was/is the program chair of FPL'20, CARDIS'20, RAW'21, VLSID'22, and DDECS'23. She is (co-)author of around 150 publications in international journals, conferences, and books. She received best paper awards and nominations at CHES'19, Asian HOST'17, and DATE'16. Nele serves as an associate editor for IEEE TIFS, IEEE CAS Magazine, IEEE S&P, and IEEE TCAD.

Machine Learning-based Distinguishers

David Gerault, Technology Innovation Institute, United Arab Emirates

While the power of machine learning for side-channel analysis has been known for a few years, its use to build cryptographic distinguishers is more recent. In his seminal paper at Crypto'19, Aron Gohr successfully used a machine learning algorithm to build differential distinguishers and key-recovery attacks competitive with the state of the art on the block cipher Speck32. Research inspired by these results has focused on two main directions: explaining them and improving on them. The explanation work investigates what such distinguishers are able to learn, since they can seemingly do better than the usual analysis techniques: can we learn from them? In terms of improvements, the main questions are: can we do better than Gohr's results on Speck32, and what about other ciphers? In this presentation, we will attempt to answer these questions through a tour d'horizon of recent research results on machine learning-based distinguishers.

Dr. David Gerault is a senior cryptanalyst at the Technology Innovation Institute (TII). After obtaining his PhD from University Clermont Auvergne (France) in 2018, he joined Nanyang Technological University (NTU) as a research scientist until 2020, and was a lecturer at the University of Surrey until 2021, when he joined TII. His main research interest is the application of AI and optimisation tools to cryptanalysis, to facilitate the work of cryptographers. In this line of work, he has (co-)authored multiple publications in both domains, in particular through his pioneering work on the use of Constraint Programming for cryptanalysis problems.

Contributed Talks

The More You Know: Improving Laser Fault Injection with Prior Knowledge

Marina Krček

Delft University of Technology, The Netherlands


A Partial Differential ML-Distinguisher and Bit Selection Mechanism for ML Differential Attacks

Amirhossein Ebrahimi Moghaddam¹, Francesco Regazzoni²·³, and Paolo Palmieri¹

¹School of Computer Science & IT, University College Cork; ²University of Amsterdam, The Netherlands; ³Università della Svizzera italiana, Switzerland

Program

The program starts at 09:30 CEST (UTC+2).

TIME (CEST, UTC+2) — SESSION/TITLE
09:00 - 09:30 Registration
09:30 - 09:45 Welcome and info
09:45 - 10:45 Keynote talk 1: Cryptography, Cyber Security and Machine Learning: Interdisciplinary benefits
Najwa Aaraj, Technology Innovation Institute, United Arab Emirates
10:45 - 11:10 Break
11:10 - 11:55 Invited talk: High-throughput network intrusion detection based on deep learning
Nele Mentens, Leiden University, The Netherlands and KU Leuven, Belgium
11:55 - 12:40 Invited talk: Leaking AI: Threat of Physical Attacks on EdgeML Devices
Shivam Bhasin, Temasek Laboratories, Nanyang Technological University, Singapore
12:40 - 13:05 The More You Know: Improving Laser Fault Injection with Prior Knowledge
Marina Krček, TU Delft, The Netherlands
13:05 - 14:15 Lunch
14:15 - 15:15 Keynote talk 2: Federated Learning for Fun and Profit: Promises, Opportunities and Challenges
Ahmad-Reza Sadeghi, TU Darmstadt, Germany
15:15 - 15:40 A Partial Differential ML-Distinguisher and Bit Selection Mechanism for ML Differential Attacks
Amirhossein Ebrahimi Moghaddam, School of Computer Science & IT, University College Cork
15:40 - 15:55 Break
15:55 - 16:40 Invited talk: Return trip: From side-channel leakage to assembly code, lessons learned from applying deep learning
Ileana Buhan, Radboud University Nijmegen, The Netherlands
16:40 - 17:25 Invited talk: Machine Learning-based Distinguishers
David Gerault, Technology Innovation Institute, United Arab Emirates
17:25 - 18:15 Panel: Is "AI for Cybersecurity" a Blessing or a Curse?

Organizers

Sponsors

Golden Sponsor