

What we are observing is the widespread adoption of what we call the "advisor model" in AI integration. Instead of completely delegating decision-making authority to algorithms, progressive organizations are building systems in which AI supplies analysis and recommendations while humans retain final decision-making authority.
This approach addresses one of the persistent challenges in AI adoption: the trust deficit. By positioning AI as an advisor rather than a substitute, companies have found that employees and stakeholders are more receptive to these technologies, particularly in industries where decisions have significant human impact.
Goldman Sachs represents a prime example of this trend. The bank has implemented a "GS AI assistant" for about 10,000 employees, with the goal of extending it to all knowledge workers by 2025.
As Chief Information Officer Marco Argenti explains, "The AI assistant really becomes like talking to another GS employee." The system does not automatically execute financial transactions, but engages with investment committees through detailed briefings that enhance human decision-making.
Measurable outcomes:
In the health sector, Kaiser Permanente has implemented the "Advance Alert Monitor" (AAM) system, which analyzes nearly 100 items from each patient's health record every hour and gives clinicians roughly 12 hours of advance warning of potential clinical deterioration (a simplified sketch of this kind of advisory monitoring loop follows this passage).
Documented impact:
Crucially, the system does not issue automatic diagnoses: physicians retain decision-making authority while benefiting from AI that can draw on patterns from thousands of similar cases.
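Kaiser's actual AAM model is proprietary, so the sketch below is only an illustration of the general shape of such an advisory monitoring loop: record variables are scored on a schedule, and the care team is alerted when a threshold is crossed, with the response left entirely to clinicians. All feature names, weights, and thresholds here are invented for the example.

```python
from dataclasses import dataclass

ALERT_THRESHOLD = 0.1  # hypothetical risk cutoff; a real system would tune this clinically


@dataclass
class PatientSnapshot:
    patient_id: str
    features: dict[str, float]  # e.g. vitals and labs pulled from the record


def deterioration_risk(snapshot: PatientSnapshot) -> float:
    """Toy risk score: a weighted sum of a few illustrative variables.

    A production system would use a validated model over ~100 record items;
    these weights are placeholders, not clinical guidance.
    """
    weights = {"heart_rate": 0.0005, "resp_rate": 0.002, "lactate": 0.02}
    return sum(weights.get(name, 0.0) * value for name, value in snapshot.features.items())


def hourly_review(snapshots: list[PatientSnapshot]) -> list[tuple[str, float]]:
    """Return (patient_id, score) pairs that should be escalated to clinicians.

    The system only advises: it surfaces a ranked worklist and leaves the
    diagnosis and intervention decisions to the care team.
    """
    flagged = [(s.patient_id, deterioration_risk(s)) for s in snapshots]
    return sorted([f for f in flagged if f[1] >= ALERT_THRESHOLD], key=lambda f: f[1], reverse=True)


if __name__ == "__main__":
    patients = [
        PatientSnapshot("pt-001", {"heart_rate": 118, "resp_rate": 28, "lactate": 3.1}),
        PatientSnapshot("pt-002", {"heart_rate": 72, "resp_rate": 14, "lactate": 0.9}),
    ]
    for patient_id, score in hourly_review(patients):
        print(f"Alert clinician: {patient_id} risk={score:.3f}")
```

The key design choice is that the output is a ranked worklist for humans, not an automated order.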
Explainable AI (XAI) is crucial for building trust and confidence when putting AI models into production. Successful organizations develop systems that communicate not only conclusions but also the underlying reasoning.
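As a minimal sketch (not a reference to any specific XAI toolkit), the example below pairs a toy linear model's recommendation with a per-feature contribution breakdown, so a reviewer can see which inputs drove the score. Every feature name, weight, and threshold here is hypothetical.

```python
def explain_score(features: dict[str, float], weights: dict[str, float], bias: float) -> dict:
    """Return both the decision score and the reasoning behind it.

    For a linear model, each feature's contribution is simply weight * value,
    which makes the "why" straightforward to communicate.
    """
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    return {
        "score": score,
        "recommendation": "approve" if score >= 0.5 else "refer to human review",
        # Sort drivers by absolute impact so the biggest reasons come first.
        "drivers": sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True),
    }


# Hypothetical loan-style example: every number here is made up.
weights = {"income_ratio": 0.8, "late_payments": -0.3, "tenure_years": 0.05}
applicant = {"income_ratio": 0.9, "late_payments": 2.0, "tenure_years": 4.0}

report = explain_score(applicant, weights, bias=0.1)
print(report["recommendation"], round(report["score"], 2))
for name, impact in report["drivers"]:
    print(f"  {name}: {impact:+.2f}")
```

Surfacing the ranked drivers alongside the recommendation is what turns a bare score into something a human reviewer can argue with.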
Proven benefits:
Confidence scores help calibrate users' trust in an AI model, signaling where human experts should apply their own judgment. Effective systems surface these scores alongside each recommendation rather than reporting conclusions alone.
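One plausible way to put such a score to work, sketched under the assumption of a simple threshold-based routing rule (the threshold and function names are illustrative, not drawn from any cited system):

```python
REVIEW_THRESHOLD = 0.80  # below this, the case goes to a human expert (illustrative value)


def route_decision(label: str, confidence: float) -> dict:
    """Attach the confidence to the recommendation and decide who acts on it.

    High-confidence cases are still only *recommended*; low-confidence cases
    are explicitly escalated so experts apply their own judgment first.
    """
    escalate = confidence < REVIEW_THRESHOLD
    return {
        "recommendation": label,
        "confidence": confidence,
        "handled_by": "human expert review" if escalate else "human sign-off on AI suggestion",
    }


for label, conf in [("flag transaction", 0.97), ("flag transaction", 0.61)]:
    print(route_decision(label, conf))
```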
A model's rate of improvement can be calculated by taking the difference between its performance at two evaluation points, allowing for continuous system improvement. Leading organizations track this routinely.
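In its simplest form this is just the change in an agreed metric divided by the elapsed time between evaluations; the sketch below spells that out with made-up numbers.

```python
def improvement_rate(metric_t1: float, metric_t2: float, weeks_elapsed: float) -> float:
    """Rate of improvement = change in the metric per unit of elapsed time."""
    return (metric_t2 - metric_t1) / weeks_elapsed


# Hypothetical quarterly review: accuracy rose from 0.82 to 0.88 over 12 weeks.
rate = improvement_rate(0.82, 0.88, weeks_elapsed=12)
print(f"{rate:+.4f} accuracy points per week")  # +0.0050
```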
This hybrid approach elegantly solves one of the most complex issues in AI implementation: accountability. When algorithms make autonomous decisions, questions of responsibility become complicated. The advisor model maintains a clear chain of accountability while leveraging the analytical power of AI.
77 percent of companies are using or exploring AI in their operations, while 83 percent say AI is a top priority in their business plans.
Investment in AI solutions and services is expected to generate a cumulative global impact of $22.3 trillion by 2030, accounting for about 3.7 percent of global GDP.
Despite the high adoption rate, only 1 percent of business leaders describe their generative AI implementations as "mature," highlighting the importance of structured approaches such as the advisor model.
Competitive advantage increasingly belongs to organizations that can effectively pair human judgment with AI analytics. It is not simply a matter of having access to sophisticated algorithms, but of creating organizational structures and workflows that make human-AI collaboration productive.
Leadership plays a critical role in shaping collaborative scenarios between humans and machines. Companies that excel in this area report significantly higher satisfaction and adoption rates among employees working together with AI systems.
Problem: Only 44% of people globally feel comfortable with companies using AI.
Solution: Implement XAI systems that provide understandable explanations of AI decisions.
Problem: 46% of leaders identify skills gaps in the workforce as a significant barrier to AI adoption.
Solution: Structured training programs and leadership that encourages AI experimentation.
The most advanced AI technologies in Gartner's 2025 Hype Cycle include AI agents and AI-ready data, suggesting an evolution toward more sophisticated and autonomous advisor systems.
Strategic AI contributors will see 4x the ROI by 2026, highlighting the importance of investing in the advisor model now.
The advisor model represents not only a technology implementation strategy, but a fundamental perspective on the complementary strengths of human and artificial intelligence.
In embracing this approach, companies are finding a path that captures the analytical power of AI while preserving the contextual understanding, ethical reasoning and stakeholder trust that remain uniquely human domains.
Companies that prioritize explainable AI will gain a competitive advantage by driving innovation while maintaining transparency and accountability.
The future belongs to organizations that can effectively orchestrate human-AI collaboration. The advisor model is not just a trend: it is the blueprint for success in the era of enterprise artificial intelligence.
AI Decision Support Systems (AI-DSS) are technological tools that use artificial intelligence to assist humans in making better decisions by providing relevant information and data-driven recommendations.
Unlike full automation, advisor systems ensure that humans retain ultimate control over decision-making, with AI providing recommendations rather than executing actions. This approach is particularly valuable in strategic decision-making scenarios.
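A minimal sketch of that division of labor, assuming a hypothetical recommend-then-approve workflow (none of these names come from a specific product): the system produces a recommendation with its rationale and confidence, but nothing executes until a human signs off.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Recommendation:
    action: str
    rationale: str
    confidence: float
    approved_by: Optional[str] = None  # stays None until a human signs off


def ai_recommend(case: dict) -> Recommendation:
    """Stand-in for the analytical layer of an AI decision-support system."""
    return Recommendation(
        action="increase inventory for region-7",
        rationale="demand forecast up 14% vs. last quarter (illustrative)",
        confidence=0.83,
    )


def execute(rec: Recommendation) -> None:
    # The advisor model's guardrail: no human approval, no action.
    if rec.approved_by is None:
        raise PermissionError("Recommendation has not been approved by a human decision-maker")
    print(f"Executing '{rec.action}' (approved by {rec.approved_by})")


rec = ai_recommend({"region": "region-7"})
print(rec.rationale, rec.confidence)
rec.approved_by = "ops.manager@example.com"  # the human retains ultimate control
execute(rec)
```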
The advisor model addresses the trust deficit in AI, with only 44 percent of people feeling comfortable with companies using AI. By maintaining human control, organizations gain greater acceptance and adoption.
The main areas include:
Strategic AI contributors see 2x the ROI compared to basic users, with metrics that include:
Key challenges include:
To build trust:
Projections indicate that by 2026, strategic AI contributors will see 4x the ROI. The evolution toward more sophisticated agentic systems is expected to preserve the advisor approach, granting greater autonomy while keeping human supervision in place.
Immediate steps:
Primary sources: McKinsey Global Institute, Harvard Business Review, PubMed, Nature, IEEE, Goldman Sachs Research, Kaiser Permanente Division of Research