How Caring Leadership Can Solve Today’s Crisis of Trust in AI

Artificial intelligence (AI) is evolving faster than our ability to understand, regulate, or even interpret it. Each day brings new capabilities, some astonishing and some alarming; yet the most important ingredient for building an AI-powered future is slipping through our fingers: trust.

Trust is not a technical feature; it is a leadership choice. And right now, the world is facing a crisis of trust in AI that threatens not only innovation but also global stability and the well-being of future generations.

Let’s see where we are today with the AI statistics.

The growth of AI and agentic AI over the past several years has been exponential. Globally, AI-related patents surged from 3,833 in 2010 to 122,511 in 2023, a nearly 32-fold increase (Stanford AI Index). The number of companies innovating in AI now exceeds 171,000, including over 46,000 startups (StartUs Insights), while AI research publications have reached 2.6 million globally, up roughly 22% in just one year (TechKV). The market for agentic AI systems, which are capable of autonomous decision-making, is projected to grow from $5.2 billion in 2024 to nearly $200 billion by 2034, a 44% CAGR (Market.us), and enterprise adoption is rising rapidly: 79% of organizations use AI agents at some level, and over one-third of enterprise software is projected to include agentic AI by 2028 (MintMCP). The number of startups focused specifically on agentic AI has jumped from roughly 1,200 in 2023 to over 3,000 today, a nearly 2.5-fold increase in just a couple of years (Market.us). These numbers demonstrate that AI and agentic AI are not only growing; they are transforming industries and societies at an unprecedented pace.

Despite remarkable breakthroughs, public confidence in AI is declining. Misinformation spreads at machine speed, creating synthetic voices, images, and narratives that can reshape public opinion in minutes. Misaligned incentives drive companies to release technologies faster than society can understand their impact. Fragmented global standards push countries toward competing AI models instead of building a shared ethical foundation; these divisions make trust ever more fragile.

What leaders are saying right now reflects this moment. Stanford AI pioneer Dr. Fei-Fei Li, a professor of computer science at Stanford University and co-director of the Human-Centered AI Institute, has urged that AI governance be grounded in evidence and reality rather than sensational extremes, calling for policy rooted in “science, not science fiction” to address AI’s real capabilities and risks.

Technology alone cannot solve this crisis; governance alone cannot repair trust. The missing piece is human leadership. My Business Caring Formula: Business Family Builder, Change Engine, Responsible, Inclusive, Passionate, Humorous, Positive is not abstract; it is essential to ethical and trusted AI.

As I write in my book The Business Caring Formula: “Caring becomes a type of discipline rooted in our core values, the kind we would ideally discover early in our lives with our families or someone special, who instill a sense of trust, consideration, honesty, and respect for one another.” Trust is the foundation of every relationship in business: without it, strategies fail, teams falter, and innovation stalls. Building trust is not optional: it is the base upon which lasting success is built.

Leaders must ensure AI is explainable, accountable, and built with human impact in mind. They must prioritize safety over speed, workforce inclusion over automation alone, and long-term vision over short-term gains.

Trust is infrastructure: without it, innovation collapses.

If we want AI to accelerate human progress instead of human anxiety, we must shift from speed-first innovation to trust-first innovation. This is not idealism; it is the competitive advantage of the future. The companies, institutions, and nations that embed trust into their AI systems will lead the next era of progress because trust scales faster than code.

Global networks like the New Julfa Network, the Armenian trade network that endured for more than 350 years (17th–19th centuries) and is now being revived through the Digital Julfa Network, demonstrate that trust has always been the most resilient form of infrastructure. For more than 300 years, Armenian merchant communities built trusted trade routes across continents through governance, ethics, and reputation. As we build digital trade systems and AI ecosystems today, the same principle applies: trust is infrastructure.

The crisis of trust in AI is real, but it is also a historic opportunity. The companies, nations, and leaders who embed trust-first leadership into AI will define the future. AI is not just a tool; it is a mirror reflecting our choices.

Recent thought leadership and research reinforce that technology alone cannot restore trust in AI: leadership must drive it. Industry experts writing in Forbes emphasize that trust in AI reflects executive values and governance, not just system performance. McKinsey & Company notes that leaders play a central role in shaping ethical AI adoption by integrating transparency, feedback, and principled decision-making into deployment strategies. PwC highlights the need for leaders to embody trust and transparency in both human and agentic AI adoption. Ethical leadership frameworks, as outlined by The AI Journal, show that prioritizing explainability, risk assessment, and accountability strengthens stakeholder confidence. Academic research (e-SSBM Repository) also finds that leaders who communicate openly about AI, its risks, and its limitations help sustain trust in organizations using these technologies.

My perspective

While Forbes and PwC correctly emphasize leadership, governance, and trust as the foundation for responsible AI, there is a critical gap in how agentic AI is currently being deployed across corporations.

Agentic AI is increasingly being used as a cost-cutting instrument to reduce human talent, rather than as a tool to improve service quality and human outcomes. In many organizations, autonomous agents and automated workflows have replaced skilled professionals in customer service, healthcare coordination, and financial support, stripping away empathy, judgment, and accountability. This approach erodes trust instead of building it.

Agentic AI should not be measured by how many jobs it eliminates, but by how effectively it augments human capability and restores value to services that have been hollowed out by excessive automation. Customers with complex financial needs, medical conditions, or life-altering decisions cannot and should not be served by robots alone. These situations require human understanding, ethical judgment, and contextual intelligence that no autonomous system can fully replicate. Leadership must have the courage to reverse failed automation where it has damaged service quality, reinstate human oversight in critical processes, and redesign agentic AI as a support layer, not a replacement for care, responsibility, and trust.

This is where caring leadership becomes decisive. Trust will not be rebuilt through faster agents or cheaper workflows; it will be rebuilt when leaders intentionally use agentic AI to serve people better, not remove them from the equation.

After reviewing the research and sources referenced in this article, the path forward became clear to me. To solve today’s crisis of trust in AI, leaders must apply the seven ingredients of the Business Caring Formula: Business Family Builder, Change Engine, Responsible, Inclusive, Passionate, Humorous, and Positive.

When leadership is caring, intentional, and human-centered, AI becomes a force for progress rather than a source of fear.
