Last week, Intel and Microsoft brought together nearly 100 security and Artificial Intelligence (AI) experts to discuss new standards for Homomorphic Encryption (HE), which is emerging as a leading method for protecting privacy in machine learning and cloud computing. The HE standards workshop took place on Intel's Santa Clara, California campus. Following the first meeting in October 2018, Intel and Microsoft initiated the founding of the HomomorphicEncryption.org group.
As more data is collected and used to power AI systems, concerns about privacy are on the rise. Casimir Wierzynski, from the office of the CTO of Intel's AI Products Group, said that Intel is collaborating with Microsoft Research and Duality Technologies on standardizing HE "to unlock the power of AI while still protecting data privacy."
Fully homomorphic encryption, or simply homomorphic encryption, refers to a class of encryption methods first envisioned by Rivest, Adleman, and Dertouzos in 1978, and first constructed by Craig Gentry in 2009. Homomorphic encryption differs from typical encryption methods in that it allows computation to be performed directly on encrypted data without requiring access to a secret key. The result of such a computation remains in encrypted form and can later be revealed by the owner of the secret key.
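To make this property concrete, here is a minimal sketch using the Paillier cryptosystem, an *additively* homomorphic scheme (simpler than the fully homomorphic schemes the standards effort covers, but it illustrates the same principle): anyone can combine ciphertexts to compute on the underlying data, yet only the secret-key holder can read the result. The tiny primes and the code itself are illustrative assumptions, not a production implementation.

```python
# Minimal Paillier cryptosystem: additively homomorphic encryption.
# Illustrative only -- toy primes, no hardening; NOT secure for real use.
import math
import random

def keygen(p=293, q=433):
    n = p * q
    lam = math.lcm(p - 1, q - 1)   # Carmichael's lambda for n = p*q
    g = n + 1                      # standard choice of generator
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n
    return (n, g), (lam, mu, n)    # (public key, private key)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)     # fresh randomness per ciphertext
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    ell = (x - 1) // n             # the "L" function L(x) = (x - 1) / n
    return (ell * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 5), encrypt(pub, 7)
# Multiplying ciphertexts adds the underlying plaintexts --
# the party doing this never sees 5, 7, or the secret key.
c_sum = (c1 * c2) % (pub[0] ** 2)
print(decrypt(priv, c_sum))  # -> 12
```

Fully homomorphic schemes extend this idea to support both addition and multiplication on ciphertexts, which is what makes arbitrary computation, including machine learning workloads, possible on encrypted data.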
Homomorphic encryption allows AI computation on encrypted data, enabling data scientists and researchers to gain valuable insights without decrypting or exposing the underlying data or models. This is particularly useful where the data is sensitive, such as medical or financial records. It even enables training models directly on encrypted data, letting researchers operate in a secure and private way while still delivering insightful results.