Information security
As part of the vetting and contracting process, customers may also consider a vendor’s data security model with regard to its AI tools. An AI system can be hacked, leading to issues such as system manipulation, data poisoning, and extraction attacks. System manipulation involves feeding the AI system malicious inputs so that it produces inaccurate output. Data poisoning is the act of modifying the training data, while in transit or at rest, so that the model returns incorrect classifications. For example, a bad actor could manipulate the training data to teach the AI model anything it wants, such as treating good software code as malicious and vice versa. Data extraction attacks place the entire AI system at risk by creating a back door in the training data to gain access to the AI model itself. To avoid these harmful scenarios, customers may ask vendors to agree to certain data security requirements, such as penetration tests, detailed review and testing of any AI-generated source code, and access controls for the personnel who supervise the AI model. Having security policies and procedures in place is critical to protecting the integrity and confidentiality of the AI model and its data.
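To make the data-poisoning risk concrete, here is a minimal sketch of a label-flipping attack. The data, labels, and the toy one-nearest-neighbor classifier are all hypothetical, invented for illustration; real poisoning targets far larger training pipelines, but the mechanism is the same: alter the training data and the model's classifications invert.

```python
def nearest_neighbor_predict(train, query):
    """Classify query by the label of the closest training point (1-NN)."""
    return min(train, key=lambda ex: abs(ex[0] - query))[1]

# Hypothetical clean training data: (feature, label) pairs,
# e.g. a "maliciousness score" for a piece of software code.
clean = [(0.1, "benign"), (0.2, "benign"), (0.8, "malicious"), (0.9, "malicious")]

# Poisoned copy: an attacker flips every label while the data
# is in transit or at rest, so good code is learned as bad and vice versa.
flip = {"benign": "malicious", "malicious": "benign"}
poisoned = [(x, y_flipped) for (x, y) in clean for y_flipped in [flip[y]]]

print(nearest_neighbor_predict(clean, 0.82))     # malicious
print(nearest_neighbor_predict(poisoned, 0.82))  # benign — the poisoned model inverts the classes
```

The contractual safeguards discussed above (integrity controls on data at rest and in transit, access controls on who can touch training data) are aimed precisely at preventing this kind of silent label tampering.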
Service levels and key performance indicators
Many customers seeking services that involve an AI model will want certain quality, accuracy, or other benchmarks tied to the service. If the AI model produces inaccurate or unhelpful data, it may be effectively useless. A customer may therefore ask a vendor for assurances, in the form of service levels and key performance indicators, that the AI model will perform as intended. One common request is an accuracy SLA, which requires that the output generated by the AI model be accurate a specified percentage of the time. This can be difficult for a vendor, since the customer may be supplying poor input data to begin with. For example, if a customer’s input data indicates that only cows can be brown, then when the AI model sees a black-and-white cow, it may classify it as another animal that it knows can be black and white. At the same time, a vendor should continuously train and improve its AI model, for example so that it learns cows come in assorted colors, and commit that the model will provide accurate output.
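The arithmetic behind an accuracy SLA is straightforward and can be sketched as follows. The function name, threshold, and the cow-classification data are hypothetical, chosen to mirror the example above; a real SLA would also specify the measurement period, the ground-truth source, and remedies for a miss.

```python
def accuracy_sla_met(predictions, ground_truth, threshold):
    """Compare model outputs to agreed ground truth for one measurement
    period and report whether the contracted accuracy threshold was met."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    accuracy = correct / len(ground_truth)
    return accuracy, accuracy >= threshold

# Hypothetical period: the model misclassifies one black-and-white cow.
preds = ["brown cow", "black-and-white cow", "brown cow", "holstein-like animal"]
truth = ["brown cow", "black-and-white cow", "brown cow", "black-and-white cow"]

acc, met = accuracy_sla_met(preds, truth, threshold=0.95)
print(acc, met)  # 0.75 False — below a 95% accuracy SLA
```

Note how the ground-truth set drives the result: if the customer's own reference data is wrong (only brown cows), the measured "accuracy" penalizes the vendor for the customer's input, which is why vendors push back on unqualified accuracy commitments.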
Termination rights
When parties terminate a relationship, data license agreements commonly dictate how to wind down the engagement upon termination or expiration, for example by requiring destruction of any shared confidential information and restricting continued use of data. Many of these standard provisions are difficult to apply to models, derivatives, and the like. For example, a vendor may include in the contract the right to continue using confidential information that was part of the input data for further training. If so, a provision requiring deletion of confidential information could conflict with a license right to that information that survives termination. It could also leak a customer’s confidential information to other customers, who gain access to it by virtue of access to the output data of the model and/or its derivatives. This is why reviewing licensing terms in conjunction with termination rights is key.

A customer may also want the ability to continue using the output and derived data long after the contract terminates. Under a typical data license, the right to use the software and the associated data is revoked upon termination, but that may not be feasible for a customer that has incorporated the output and derived data into its own dataset. To avoid a potential infringement claim, a customer may want a continued-use clause allowing it to use the data already provided without restriction. In turn, a vendor may want to limit the customer’s unrestricted use of its data after termination or expiration so that the customer’s use does not erode the vendor’s business value. For example, if a customer holds the output and derived data in its database and turns around and sells that data to others, it could undercut the vendor’s business.
Vendors and customers should review termination rights carefully to ensure their future interests are protected.
- Purchasers of AI systems should consider clauses in contracts to protect themselves from new AI risks
- These include data security measures, service levels and key performance indicators, and termination clauses