Where Is The Future Of Artificial Intelligence?

Artificial intelligence and machine learning bring new vulnerabilities along with their benefits. This article describes how several companies work to minimize those risks.

When companies adopt new technologies, security is often an afterthought. Getting new products or services to customers as quickly and cheaply as possible seems more important.

Artificial intelligence (AI) and machine learning (ML) present the same opportunities for vulnerabilities and misconfigurations as earlier technological advances, but they also carry unique risks of their own. As companies embark on AI-driven digital transformations, those risks may grow larger. Edward Raff, chief scientist at Booz Allen Hamilton, said: "Don't rush into this field."

Compared with other technologies, artificial intelligence and machine learning require more data, and more complex data. The algorithms were developed by mathematicians and data scientists and came out of research projects. Raff said that the scientific community has only recently begun to recognize the security problems of artificial intelligence.

Cloud platforms often handle much of this workload, which adds another layer of complexity and vulnerability. Not surprisingly, cybersecurity is the risk AI adopters worry about most. A Deloitte survey released last month showed that 62% of adopters see cybersecurity risks as a major concern, but only 39% said they are prepared to address them.

To complicate matters further, cybersecurity is itself one of the primary applications of artificial intelligence. Jeff Loucks, executive director of the Deloitte Center for Technology, Media and Telecommunications, said that the more experienced companies are with artificial intelligence, the more they worry about cybersecurity risks.

In addition, even more experienced companies do not always follow basic security practices, such as keeping a complete audit and testing regime for all AI and ML projects. Loucks said companies are not doing a very good job of implementing these practices today.

AI And ML's Demand For Data Brings Risks:

AI and ML systems require three sets of data:

• Training data to build predictive models

• Test data to evaluate how well the model performs

• Operational data once the model is put into use

Although real-time transaction or operational data is obviously a valuable corporate asset, it is easy to overlook the training and testing data pools, which also contain sensitive information.

Many of the principles used to protect data in other systems can be applied to AI and ML projects, including anonymization, tokenization, and encryption. The first step is to ask whether the data is really needed: when preparing AI and ML projects, it is tempting to collect every bit of data possible and then see what can be done with it.
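To make that concrete, here is a minimal sketch of pseudonymizing a direct identifier before it enters a training set. The field names, the salted-hash scheme, and the `TOKENIZATION_SALT` environment variable are illustrative assumptions, not a method described in the article; the point is that the model can keep a stable join key without ever seeing the raw value.

```python
import hashlib
import os

# Salt kept outside the dataset (e.g., in a secrets manager); the env var name is hypothetical.
SALT = os.environ.get("TOKENIZATION_SALT", "change-me")

def tokenize(value: str) -> str:
    """Replace a direct identifier with a salted, one-way token."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

# Hypothetical student record: the training row keeps a stable join key, never the raw email.
record = {"email": "student@example.edu", "gpa": 3.4, "credits": 72}
training_row = {**record, "email": tokenize(record["email"])}
print(training_row)
```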

Focusing on business results can help companies limit the data they collect to what they actually need. John Abbatico, chief technology officer of Othot, which analyzes student data for educational institutions, said that data science teams are hungry for data, but when handling student data, his company makes it clear that highly sensitive PII (personally identifiable information) is not required and should never be included in the data provided to its team.

Of course, mistakes happen. For example, customers sometimes provide sensitive personal information such as Social Security numbers. That information does not improve the performance of the model, but it does add risk. Abbatico said his team has developed a procedure to identify PII, remove it from all systems, and notify the customer of the error.
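In the same spirit as the procedure Abbatico describes, a toy scan for stray Social Security numbers might look like the sketch below. The column names, the regex, and the drop-and-notify flow are illustrative assumptions, not Othot's actual pipeline.

```python
import pandas as pd

SSN_PATTERN = r"^\d{3}-\d{2}-\d{4}$"  # illustrative; real scanners cover many more formats

# Hypothetical inbound customer file that should never have contained SSNs.
df = pd.DataFrame({
    "student_id": ["S001", "S002"],
    "ssn": ["123-45-6789", "987-65-4321"],
    "gpa": [3.4, 3.9],
})

# Flag any column whose values look like SSNs, drop it, and surface the incident.
flagged = [col for col in df.columns
           if df[col].astype(str).str.match(SSN_PATTERN).any()]
if flagged:
    print("PII detected in columns:", flagged, "- removing and notifying the customer")
    df = df.drop(columns=flagged)
print(df)
```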

Artificial intelligence systems also need contextual data, which can dramatically expand a company's exposure. Suppose an insurance company wants to better understand the driving habits of its customers. It can purchase shopping, driving, location, and other data sets that can easily be cross-correlated and matched with customer accounts. This new, exponentially larger data set is more attractive to hackers, and if it is breached, it will do greater damage to the company's reputation.

Artificial Intelligence Security Design:

One company with a lot of data to protect is online file sharing platform Box, which uses AI to extract metadata and improve search and classification capabilities. Box CISO Lakshmi Hanspal said that Box can, for example, extract terms, renewal dates, and pricing information from contracts. Most of Box's customer content is either classified through user-defined categories or not classified at all; customers are sitting on a mountain of data that may be useful for their digital transformations.

Hanspal said that protecting this data is a central concern for Box, and the same data protection standards apply to its artificial intelligence systems, including training data: Box's business is built on establishing and maintaining trust.

This means that all systems, including new artificial intelligence projects, are built around core data security principles, including encryption, logging, monitoring, authentication, and access control. Hanspal pointed out that digital trust is inherent to the platform and is put into practice operationally.

Box has a secure development process for both traditional code and new AI- and ML-supported systems. Hanspal said: "We align with ISO industry standards for secure product development. Security by design is built in, and there are checks and balances, including penetration testing and red teaming."

Mathematicians and data scientists usually do not worry about potential vulnerabilities when writing AI and ML algorithm code. When companies build AI systems, they typically draw on existing open source algorithms, use commercial "black box" AI systems, or build their own from scratch.

With open source code, attackers may have embedded malicious code, or the code may contain vulnerabilities or vulnerable dependencies. Proprietary commercial systems also incorporate open source code, along with new code that corporate customers usually cannot inspect.

Inversion Attacks Are A Major Threat:

AI and ML systems are usually a combination of open source libraries and newly written code produced by engineers who are not security specialists. In addition, there are no standard best practices for writing secure AI algorithms. Given the shortage of security experts and the shortage of data scientists, experts who combine both skills are rarer still.

One of the biggest potential risks with AI and ML algorithms, and the long-term threat that Booz Allen Hamilton's Raff worries about most, is that they may leak training data to attackers. He said: "There are inversion attacks that allow an artificial intelligence model to give you information about itself and about the training it received. If it was trained on PII data, you can get the model to leak that information to you, and the actual PII may be exposed."

Raff said that this is an area of active research and a huge potential pain point. Some tools can protect training data from inversion attacks, but they are too expensive. He said: "We know how to stop this threat, but doing so increases the cost of training models by 100 times. That is not an exaggeration, so nobody does it."
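To illustrate the kind of signal such attacks exploit, the toy example below (an illustrative assumption, not a specific attack or defense Raff mentions) shows how an overfit model assigns noticeably higher confidence to records it was trained on than to unseen records, a gap that membership-inference and inversion-style techniques can turn into information about the training data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(X, y, test_size=0.5, random_state=0)

# Deliberately overfit: deep trees tend to memorize individual training records.
model = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
model.fit(X_member, y_member)

def mean_top_confidence(clf, data):
    # Average probability the model assigns to its own predicted class.
    return clf.predict_proba(data).max(axis=1).mean()

print("confidence on training members:", mean_top_confidence(model, X_member))
print("confidence on non-members:     ", mean_top_confidence(model, X_nonmember))
# A large gap lets an attacker guess whether a given record was in the training set.
```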

You Can't Secure What You Can't Explain:

Another area of research is explainability. Today, many AI and ML systems, including the AI- and ML-powered tools offered by many major cybersecurity vendors, are "black box" systems. YL Ventures CISO Sounil Yu said: "Vendors have not built explainability in. In security, being able to explain what happened is a fundamental component. If I can't explain why something happened, how can I remediate it?"

For companies that build their own AI or ML systems, when a problem occurs they can go back to the training data or the algorithm and work out what went wrong. Yu pointed out that if you are buying the system from someone else, you have no idea what the training data was.
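A minimal sketch of the advantage Yu describes for self-built systems appears below; the dataset and model are placeholders, not anything from the article. Because you control the pipeline, you can at least surface which features the model relied on and trace a questionable prediction back to the training data, which a vendor's black box does not allow.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Placeholder data standing in for a self-built model's training set.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# Rank the features the model leaned on most; a vendor's black box offers no such view.
ranked = sorted(zip(data.feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```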

It’s Not Just Algorithms That Need To Be Protected:

An artificial intelligence system is more than a natural language processing engine, a classification algorithm, or a neural network. Even if those parts are completely secure, the system still has to interact with users and back-end platforms.

Does the system use strong authentication and the principle of least privilege? Is the connection to the back-end database secure? What about connections to third-party data sources? Is the user interface resilient to injection attacks?
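As a small example of one item on that list, the sketch below shows a user-facing layer of an AI service querying its back-end store with a parameterized query so hostile input cannot inject SQL. The table and field names are hypothetical.

```python
import sqlite3

# Hypothetical back-end store for the AI service's per-user predictions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE predictions (user_id TEXT, score REAL)")
conn.execute("INSERT INTO predictions VALUES ('alice', 0.91)")

user_supplied_id = "alice' OR '1'='1"  # hostile input from the user interface

# The ? placeholder binds the value, so the payload is treated as a literal string
# rather than executable SQL; string formatting here would instead match every row.
rows = conn.execute(
    "SELECT score FROM predictions WHERE user_id = ?", (user_supplied_id,)
).fetchall()
print(rows)  # [] -- the injection attempt matches nothing
```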

Another source of human-driven insecurity is unique to artificial intelligence and machine learning projects: data scientists. Othot's Abbatico said that good data scientists experiment with data and come up with insightful models, but when it comes to data security, experimentation can lead to risky behavior, such as moving data to insecure locations or failing to delete sample data sets once they are finished with them. Othot invested in SOC 2 certification early on, and those controls help enforce strong data protection practices throughout the company, including when moving or deleting data.

Peter Herzog, product manager at artificial intelligence agency Urvin AI and co-founder of the international non-profit security research organization ISECOM, said: "The truth is that the biggest risk in most artificial intelligence models is not the artificial intelligence; the problem is the people. There is hardly an artificial intelligence model without security issues, because people decide how to train the models, people decide what data to include, people decide what they want to predict, and people decide how much of that information to expose."

Another security risk specific to AI and ML systems is data poisoning, in which an attacker feeds information into the system to force it to make inaccurate predictions. For example, an attacker may trick a system into thinking that malware is safe by feeding it examples of legitimate software that carry indicators similar to the malware's.

Raff said: "This is something most companies worry about a great deal. At present, I am not aware of any artificial intelligence systems being attacked this way in the real world. In the long run it is a real threat, but for now the classic tools attackers use to evade antivirus software still work, so they don't need to get any fancier."
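A toy label-flipping experiment, sketched below, shows the mechanics. The data, model, and poisoning rate are illustrative assumptions, not a documented real-world attack, but flipping the labels of a slice of "malicious" training samples measurably degrades the resulting classifier.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for a malware classifier's feature vectors (label 1 = malicious).
X, y = make_classification(n_samples=3000, n_features=15, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison the training set: relabel half of the malicious samples as benign.
rng = np.random.default_rng(1)
malicious_idx = np.where(y_train == 1)[0]
poison_idx = rng.choice(malicious_idx, size=len(malicious_idx) // 2, replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean model accuracy:   ", clean.score(X_test, y_test))
print("poisoned model accuracy:", poisoned.score(X_test, y_test))
```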

Avoid Bias And Model Drift:

When AI and ML systems are used for enterprise security, for example for user behavior analytics, network traffic monitoring, or data exfiltration detection, bias and model drift can create risk. Training data sets that quickly become obsolete can leave organizations vulnerable, especially as they increasingly rely on artificial intelligence for defense. Enterprises need to make updating their models a continuous activity.

In some cases, updating the training data can be automated. For example, adjusting a model to account for changing weather patterns or supply chain delivery schedules can make it more reliable over time. When the information sources may include malicious actors, however, the training data needs to be carefully curated to avoid poisoning and manipulation.
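One common way to make model updates an ongoing activity is to watch for drift between the data the model was trained on and the data it now sees in production. The sketch below uses a two-sample Kolmogorov-Smirnov test on a single feature; the distributions and the significance threshold are illustrative assumptions, not a method named in the article.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # what the model was trained on
live_feature = rng.normal(loc=0.6, scale=1.0, size=5000)      # what production now sees

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:  # illustrative threshold
    print(f"drift detected (KS statistic={stat:.3f}); schedule retraining and review data sources")
else:
    print("no significant drift detected")
```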

Companies are already dealing with algorithms that raise ethical issues, such as facial recognition or recruitment platforms that discriminate against women or minorities. When bias creeps into an algorithm it can also create compliance problems, or, in the case of self-driving cars and medical applications, lead to deaths.

Just as algorithms can inject bias into predictions, they can also be used to control for it. Othot, for example, helps universities optimize incoming class sizes or meet financial goals. Othot's Abbatico said that building models without the proper constraints can easily introduce bias: "It takes extra effort to watch for bias. Including diversity-related goals helps the model understand the objectives and helps offset bias. Bias can easily creep in if diversity goals are not included as a constraint."
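A hedged sketch of the kind of bias check this implies is shown below: compare the model's positive-decision rate across demographic groups and flag large gaps for review. The metric, threshold, and data are illustrative assumptions, not Othot's actual constraints.

```python
import numpy as np

def selection_rate_gap(predictions, group_labels):
    """Return the largest difference in positive-decision rate between groups, plus the per-group rates."""
    rates = {g: predictions[group_labels == g].mean() for g in np.unique(group_labels)}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical admit/offer decisions and demographic group labels.
preds = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"])

gap, rates = selection_rate_gap(preds, groups)
print("selection rates by group:", rates)
if gap > 0.2:  # illustrative threshold
    print(f"warning: selection-rate gap of {gap:.2f} exceeds threshold; review for bias")
```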

The Future Of Artificial Intelligence Is In The Cloud:

AI and ML systems require large amounts of data, complex algorithms, and powerful processors that can scale on demand. All the major cloud providers are scrambling to offer data science platforms that put everything in one convenient place. Data scientists no longer need to wait for IT to provision servers for them; they can simply go online, fill out a few forms, and be up and running.

According to the Deloitte AI survey, 93% of companies are using some form of cloud-based AI. Deloitte's Loucks said: "It makes it easier to get started." But those projects then grow into operational systems, and as they scale, configuration problems multiply. With the newest services, centralized, automated configuration and security management dashboards may not yet be available, and companies must either write their own or wait for vendors to catch up and fill the gaps.

This can be a problem when the people using these systems are citizen data scientists or researchers without a strong security background. In addition, vendors have historically shipped new features first and security features later, which becomes a problem when systems are deployed quickly and then scale even faster. We have already seen this pattern with IoT devices, cloud storage, and containers.

Raff said that AI platform vendors are becoming more aware of this threat and have learned from past mistakes. He said: "Given the historical 'security last' mentality, the plans to build security in are much more proactive than we would have expected. The ML community is more concerned about this, so the lag may well be shorter."

Deloitte AI co-leader Irfan Saif agrees, particularly regarding the major cloud platforms that support AI workloads for large enterprises: in terms of the maturity of their cybersecurity capabilities, they are arguably further along than earlier generations of technology were.

Artificial Intelligence Project Safety Checklist:

The following checklist for helping to secure artificial intelligence projects is taken from Deloitte's "The State of Artificial Intelligence in the Enterprise" (3rd edition):

• Maintain an official inventory of all AI implementations

• Align AI risk management with broader risk management efforts

• Designate an executive responsible for AI-related risks

• Conduct internal audits and testing

• Use external vendors for independent audits and testing

• Train practitioners on how to recognize and resolve ethical issues around AI

• Work with external parties on sound AI ethics policies

• Ensure that AI vendors provide unbiased systems

• Develop policies or a board to guide AI ethics

