
Choose your GenAI model providers, models, and use cases wisely

Generative AI (GenAI) vendors, models, and use cases are not created equal. Model providers must be trusted to handle sensitive data. Models, like tools in a toolbox, may be better suited for some jobs than others. And use cases vary widely in risk.

When selecting GenAI model providers (e.g., tech companies and others offering models) and their models, due diligence is wise. For example, DeepSeek dominated headlines in early 2025 as a trendy pick for high-performance, lower-cost GenAI models. But not everyone is sold: a number of U.S. states and the federal government are reportedly implementing or considering bans because the models allegedly transfer user data to China, among other concerns.

Before selecting a provider and model, it is important to learn:

- where the provider is located;
- where data is transferred and stored;
- where and how the training data was sourced;
- whether the provider complies with the NIST AI Risk Management Framework, ISO/IEC 42001:2023, and other voluntary standards;
- whether impact or risk assessments have been performed under the EU AI Act, the Colorado AI Act, and other laws;
- what guardrails and other safety features are built into the model; and
- how the model performs relative to planned use cases.

This information may be gleaned from the “model card” and other documentation for each model, conversations with the provider, and other research. And, of course, the contractual terms governing the provider relationship and model usage are critical. Key issues include IP ownership, confidentiality and data protection, cybersecurity, liability, reps and warranties, and indemnification.
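For teams that want to track these diligence questions systematically, the answers can be captured as structured data so gaps are visible before a model is approved. Below is a minimal, hypothetical sketch in Python; the field names and gap checks are illustrative assumptions, not a prescribed framework.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDueDiligence:
    """Hypothetical record of diligence answers for one provider/model pairing."""
    provider: str
    model: str
    provider_jurisdiction: str          # where the provider is located
    data_residency: str                 # where data is transferred and stored
    training_data_provenance: str       # where and how training data was sourced
    voluntary_standards: list[str] = field(default_factory=list)  # e.g., NIST AI RMF, ISO/IEC 42001:2023
    legal_assessments: list[str] = field(default_factory=list)    # e.g., EU AI Act, Colorado AI Act
    guardrails: list[str] = field(default_factory=list)           # built-in safety features
    benchmark_notes: str = ""           # performance relative to planned use cases

    def open_items(self) -> list[str]:
        """List diligence gaps to close before approving the model."""
        gaps = []
        if not self.voluntary_standards:
            gaps.append("no voluntary-standard compliance documented")
        if not self.legal_assessments:
            gaps.append("no impact/risk assessments documented")
        if not self.guardrails:
            gaps.append("no guardrails documented")
        return gaps

# Illustrative usage with placeholder names:
record = ModelDueDiligence(
    provider="Example AI Co.", model="example-model-1",
    provider_jurisdiction="U.S.", data_residency="U.S. data centers",
    training_data_provenance="licensed + public web (per model card)",
)
print(record.open_items())  # all three gaps still open for this record
```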

Once the appropriate provider and model are selected, the job is not done. Use cases must also be scrutinized. Even if a particular GenAI model is approved for general use, what it is used for still matters (a lot). It may be relatively low risk to use an AI model for one purpose (e.g., summarizing documents), but the risk may increase for another (e.g., autonomous resume screening). Companies should calibrate their risk tolerance for AI use cases, leaning on a cross-functional AI advisory committee. Use cases should be vetted to mitigate risks including loss of IP ownership, loss of confidentiality, hallucinations and inaccuracies in outputs, IP infringement, non-unique outputs, and biased or discriminatory outputs and outcomes. If employees are already using GenAI in an ad hoc manner before a formal governance framework is implemented, identify that use through the advisory committee and other outreach, and prioritize higher-risk use cases for review and potential action.
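One way to operationalize this calibration is a simple use-case risk register that maps each proposed use to a tier and the reviews that tier requires. The sketch below is hypothetical; the use cases, tiers, and review steps are assumptions a company would tailor to its own risk tolerance.

```python
# Hypothetical risk-tiering table; names and tiers are illustrative, not legal advice.
RISK_TIERS = {
    "document_summarization": "low",      # internal drafting aid, human reviews output
    "marketing_copy_drafting": "medium",  # public-facing; accuracy and IP review needed
    "resume_screening": "high",           # bias/discrimination and legal exposure
}

REQUIRED_REVIEW = {
    "low": ["manager sign-off"],
    "medium": ["manager sign-off", "legal review"],
    "high": ["manager sign-off", "legal review", "AI advisory committee approval"],
}

def approvals_needed(use_case: str) -> list[str]:
    """Return the review steps a proposed use case must clear."""
    tier = RISK_TIERS.get(use_case, "high")  # unknown use cases get the strictest path
    return REQUIRED_REVIEW[tier]

print(approvals_needed("document_summarization"))  # ['manager sign-off']
print(approvals_needed("resume_screening"))        # all three review steps
```

Note the design choice of defaulting unlisted use cases to the highest tier: novel, unvetted uses route to the advisory committee rather than slipping through.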

Once enterprise risk tolerance is calibrated, AI usage policies and employee training should be rolled out. The policy and training should articulate which models and use cases are (and are not) permitted and explain the “why” behind those decisions to help employees contextualize the key risks. Policies should account for both existing laws and voluntary frameworks like NIST's and ISO/IEC's, and should remain living documents subject to regular review and revision as the legal and technological landscape continues to evolve rapidly. Employee training is not only a good idea but may also be a legal mandate, e.g., under the “AI literacy” requirement of the EU AI Act for companies doing business in the EU.
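A policy can also be made machine-checkable, so employees get a quick answer on whether a model/use-case pairing is permitted along with the “why.” The following is a hypothetical sketch; the model names, use cases, and rationales are placeholders.

```python
# Hypothetical policy allowlist keyed by (model, use_case); entries are placeholders.
APPROVED = {
    ("approved-model-a", "document_summarization"):
        "Low risk; a human reviews every output.",
    ("approved-model-a", "code_drafting"):
        "Permitted; confidential code must not be submitted to the model.",
}

def check_request(model: str, use_case: str) -> str:
    """Explain whether a model/use-case pairing is permitted and why."""
    rationale = APPROVED.get((model, use_case))
    if rationale:
        return f"Permitted: {rationale}"
    return "Not permitted: raise with the AI advisory committee before proceeding."

print(check_request("approved-model-a", "document_summarization"))
print(check_request("approved-model-a", "resume_screening"))  # not on the allowlist
```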

Bottom line: all businesses and their employees will soon be using GenAI in day-to-day operations, if they are not already. To mitigate risk, carefully select your vendors, models, and use cases, and implement policies and training that reflect enterprise risk tolerance.

Tags

intellectual property, artificial intelligence, regulatory, privacy data and cybersecurity, advertising marketing and promotions