You don’t need us to tell you that artificial intelligence has overtaken the world over the last few years. Every company is working to adapt its technology strategy to include AI. That’s because AI is poised to change how everyone works, from artists and academics to retailers and distributors.
As we’ve covered in other articles on AI, generative AI has already paid off for many distributors who were quick to see its potential. Generative AI is the technology that underlies popular tools like ChatGPT, which can generate text, images, and other types of content in response to a user’s prompt. AI-powered tools allow distributors to increase efficiency through automation and quickly analyze reams of data to make smart, nimble business decisions.
But as Scientific American put it, “Artificial intelligence programs, like the humans who develop and train them, are far from perfect.”
If you expect AI to be a business panacea that allows your company to go into perpetual-growth autopilot, you will not only be disappointed, but you could also end up in quite a bit of trouble. Getting the most out of generative AI means having a realistic view of what it can and can’t do yet and its inherent limitations.
In 2023, a law firm in New York found itself in hot water after submitting a legal brief that included six “bogus” cases, complete with fake quotes and citations. As it turned out, a lawyer with 30 years of experience had used ChatGPT to perform his research for the brief, and as he later told the court, he was “unaware that its content could be false.”
This lack of awareness can be deadly for any distributor who hopes to use generative AI tools to boost their business. Before adopting an AI-powered tool, you must understand that for all its recent advances, AI still makes mistakes.
Tools like ChatGPT can be prone to what the Center for Science in the Public Interest (CSPI) calls “hallucinations,” which happen when an AI bot “puts words, names, and ideas together that appear to make sense but actually don’t belong together.” According to one survey published by the Association for Computing Machinery, these hallucinations can be caused by source-reference divergence, which may occur when data is collected imperfectly.
In other words, humans still matter, and it’s up to humans to do the due diligence necessary to get the most out of AI.
A generative AI tool is only as smart as the data it’s trained on, and a smart AI tool is limited by the quality of the data it’s told to analyze. If the input is garbage, the output will also be garbage.
For example, a study by Johns Hopkins and the Georgia Institute of Technology found that robots trained on algorithms that make stereotypical assumptions about groups of people will then make biased or prejudiced decisions.
Generative AI models are also prone to plagiarism, which can lead to legal trouble, as in the case of Microsoft and OpenAI, which were sued by The New York Times in late 2023 over Bing Chat’s and ChatGPT’s unauthorized use of the paper’s articles.
As your company takes steps toward adopting AI, it’s also important to consider how those tools will access and analyze your business data. A 2023 McKinsey report on AI opens by saying, “If your data isn’t ready for generative AI, your business isn’t ready for generative AI.” Without clean, accessible data, your AI tools risk running into a “garbage in, garbage out” situation.
Distributors grappling with data quality challenges face a significant obstacle to leveraging generative AI effectively. Bad inventory data, duplicate customer records, and inaccurate product information complicate day-to-day operations and intensify the “garbage in, garbage out” predicament, potentially derailing AI-driven initiatives.
Bad Inventory Data: Misjudgments in stock data can lead to overstocking, tying up capital unnecessarily, or stockouts, resulting in lost sales and diminished customer trust. These errors impede the efficacy of AI systems designed for demand forecasting, highlighting the necessity of rigorous inventory data management as a strategic cornerstone for exploiting AI's predictive analytics.
Duplicate Customer Records: Duplicate records in customer databases pose another significant challenge. These duplications, arising from manual entry mistakes or the integration of disparate systems, skew AI-powered customer behavior analysis and personalized marketing efforts. The outcome is often wasted resources and customer annoyance, which can tarnish established relationships. Thoroughly cleaning and merging customer data is a critical prerequisite for any AI deployment targeting customer interactions.
Inaccurate Product Information: Discrepancies in product descriptions, specifications, or pricing can misguide AI recommendations, leading to order inaccuracies and customer dissatisfaction. AI models depend on current and correct product data to function optimally, making it imperative for distributors to ensure data integrity to facilitate accurate recommendations and flawless order processing.
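The data hygiene described above does not require exotic tooling. A minimal sketch in Python, with hypothetical field names and thresholds, shows the idea: collapse duplicate customer records on a normalized key and flag inventory rows with impossible values before any AI system ever sees them.

```python
# A minimal sketch of pre-AI data hygiene. All field names ("email",
# "qty", "price") and validation rules here are hypothetical examples.

def normalize_email(email: str) -> str:
    """Lowercase and trim an email so trivial variants match."""
    return email.strip().lower()

def dedupe_customers(records: list[dict]) -> list[dict]:
    """Keep the first record seen for each normalized email."""
    seen, unique = set(), []
    for rec in records:
        key = normalize_email(rec["email"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

def flag_bad_inventory(rows: list[dict]) -> list[dict]:
    """Return rows whose quantity is negative or price is non-positive."""
    return [r for r in rows if r["qty"] < 0 or r["price"] <= 0]

customers = [
    {"name": "Acme Supply", "email": "buyer@acme.com"},
    {"name": "ACME Supply", "email": " Buyer@Acme.com "},  # same buyer, messy entry
]
inventory = [
    {"sku": "A-100", "qty": 42, "price": 9.99},
    {"sku": "B-200", "qty": -5, "price": 9.99},  # impossible stock level
]

print(len(dedupe_customers(customers)))           # duplicates collapse to 1
print(flag_bad_inventory(inventory)[0]["sku"])    # B-200 gets flagged
```

Real pipelines use fuzzier matching (addresses, phone numbers, company-name variants), but the principle is the same: decide what “clean” means, enforce it programmatically, and do it before the data feeds an AI model.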
These challenges underscore the importance of robust data governance practices within the distribution sector. It's not merely about preparing data for AI adoption; it's about committing to a strategic investment in data quality that safeguards the future. This includes cleaning data and maintaining its accuracy and consistency across all business areas.
Businesses should boldly embrace AI, but that doesn’t mean they can skimp on security. AI tools are software, so they require IT infrastructure to keep the data they use safe.
As lawyer Fred Mendelsohn points out in Industrial Distribution, “Hackers who gain access to a generative AI system can manipulate it to generate harmful or misleading content, disrupt business operations, or steal sensitive information. Protecting AI systems from cyber threats requires constant vigilance and investment in security measures.”
Distributors that don’t invest in protecting those systems risk exposing the large amounts of data they analyze to bad actors.
The good news is that the systems distributors have come to know and trust provide strong protection. Microsoft’s Azure Cloud Platform and Amazon Web Services, among other cloud providers, invest heavily in security. As distributors build capabilities and functionality within these systems, they can rely on the same investment and protection that enterprises worldwide leverage. In any case, understand the terms and conditions of each provider that may use generative AI or have access to sensitive data.
So, how can you avoid these pitfalls beyond simply being aware of them? When using a general-purpose model like ChatGPT, you can reduce the risk of hallucinations and biased outputs by giving it a specific persona.
For example, if you’re prepping to meet with a potential new vendor or customer to negotiate a contract, you could ask ChatGPT to help you come up with a negotiation strategy based on the particulars of your situation and add instructions like, “Act like an expert in principled negotiation, employing the framework from Fisher and Ury’s Getting to Yes.” With more targeted prompting, the AI model is more likely to avoid source-reference divergences and biased frameworks.
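In practice, persona instructions like this are usually sent as the “system” message of a chat request, with the actual question as the “user” message. A minimal sketch of assembling that payload follows; the message format mirrors the common chat-completions convention, and the negotiation wording is just the example from above.

```python
# A minimal sketch of persona prompting: the system message pins the model
# to a role and framework before the user's request is sent. The helper
# name and negotiation details are hypothetical examples.

def build_persona_prompt(persona: str, request: str) -> list[dict]:
    """Assemble a chat payload with a persona-setting system message."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": request},
    ]

messages = build_persona_prompt(
    persona=(
        "Act like an expert in principled negotiation, employing the "
        "framework from Fisher and Ury's Getting to Yes."
    ),
    request=(
        "Help me draft a negotiation strategy for a first contract "
        "meeting with a new packaging vendor."
    ),
)

print(messages[0]["role"])  # the persona travels as the "system" message
```

This list of messages is what you would hand to your AI provider’s chat API; because the persona is set at the system level, it constrains every response in the conversation rather than a single turn.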
When you’re ready to embrace more advanced uses of AI for business, McKinsey advises that your business moves to fix its data architecture’s foundations, prioritizing “fixes that provide the greatest benefit to the widest range of use cases, such as data-handling protocols for personally identifiable information (PII), since any customer-specific generative AI use case will need that capability.”
You may need to upgrade that infrastructure to handle all the data processing generative AI needs to perform.
McKinsey also suggests investing time and effort in prompt engineering, which requires companies to “manage the integration of knowledge graphs or data models and ontologies” into prompts, developing prompt formats that ensure quality outputs.
ProfitOptics can help you avoid those pitfalls by successfully preparing your business to use AI. Schedule a call to speak to our experts today.