IBM Research releases differential privacy library that works with machine learning
The open-source repository is unique in that most tasks can be run with only a single line of code, according to the company.
Differential privacy has become an integral tool for data scientists, letting them learn from the bulk of their data while ensuring that the results cannot be used to distinguish or re-identify any individual.
To help more researchers with their work, IBM released the open-source Differential Privacy Library. The library “boasts a suite of tools for machine learning and data analytics tasks, all with built-in privacy guarantees,” according to Naoise Holohan, a research staff member on IBM Research Europe’s privacy and security team.
"Our library is unique among others in giving scientists and developers access to lightweight, user-friendly tools for data analytics and machine learning in a familiar environment; in fact, most tasks can be run with only a single line of code," Holohan wrote in a blog post on Friday.
"What also sets our library apart is that our machine learning functionality enables organizations to publish and share their data with rigorous guarantees on user privacy like never before."
In an interview, Holohan explained that differential privacy has become so popular that for the first time in its 230-year history, the US Census will use differential privacy to keep the responses of citizens confidential when the data is made available.
Chris Sciacca, communications manager at IBM Research, added that the 2020 Census was a good example of how differential privacy can be used for any large data sets where you can do statistical analysis.
“Healthcare data would be another area that it would be interesting for. Any large data sets where you want to keep the data anonymous but you don’t want to add so much noise to it that it’s useless. So here you’re just adding a little bit of noise where you can still get statistical anomalies to look at trends in large data sets,” Sciacca said.
Differential privacy allows data collectors to use mathematical noise to anonymize information, and IBM's library stands out because its machine learning functionality enables organizations to publish and share their data with rigorous guarantees on user privacy.
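To make the "mathematical noise" idea concrete, here is a minimal, self-contained sketch of the classic Laplace mechanism, the textbook way to release a differentially private statistic. This is illustrative code, not IBM's implementation; all function names are invented for the example.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample zero-centred Laplace noise via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    # log(1 - 2|u|) is <= 0, so the sign of the noise follows sign(u).
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, epsilon, lower, upper):
    """Differentially private mean of values known to lie in [lower, upper]."""
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    # Sensitivity: changing one record moves the mean by at most this much.
    sensitivity = (upper - lower) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)

# Example: a smaller epsilon adds more noise (more privacy, less accuracy).
ages = [23, 35, 41, 29, 52, 38]
print(private_mean(ages, epsilon=0.5, lower=18, upper=90))
```

The key design point is exactly the trade-off Sciacca describes: the noise is calibrated to the sensitivity of the statistic and the privacy budget epsilon, so aggregate trends survive while any single record's contribution is masked.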
"Originally, when we started looking at the space of open-source software and differential privacy, we noticed that there was a big gap in the market in terms of being able to do machine learning with differential privacy easily. There is a lot of work in the literature where algorithms have been studied, made differentially private and solutions have been presented, but there was no single repository or library to go to for machine learning with differential privacy," he said.
“We decided to build this library that, using existing packages in Python, allows you to build on top of them, and then you can do machine learning with differential privacy guarantees built-in. A lot of the commands you can execute in a single line of code, so it’s very user friendly. It’s easy to use and it can be integrated easily within scripts people have so there isn’t a lot of extra effort required.”
Last year, Google released its open-source differential privacy library and executives spoke about how they use it for a variety of their services. If you’ve ever looked at Google Maps and seen that fun chart of times when a business will be the busiest, you can thank differential privacy for it.
Differential privacy allows Google to anonymously track when most people eat at a certain restaurant or shop at a popular store, and in 2014 the company used it to improve its Chrome browser as well as Google Fi.
Companies like Apple and Uber use versions of differential privacy to optimize their services while protecting the data of users.
Holohan said the IBM repository is already being used extensively for experimentation and to see what effect differential privacy has on machine learning algorithms. Academic institutions and bloggers are using the software to show how differential privacy works and he added that the library is being used internally at IBM to look at the impact of differential privacy on various applications.
“It has applicability to basically any application of data so that gives a very good opportunity to do a lot of work in a lot of different areas. We have focused on machine learning because the application of privacy-preserving protocols to machine learning fits very well and machine learning is very prevalent in any use of data,” he said.
"The next step is going to be allowing data scientists and analysts to do a lot of statistical analysis easily with differential privacy, and our library is the first of a few steps along that path."