Mapping Trust in LLMs

Bridging the Gap Between Theory and Practice in AI Trustworthiness

This research provides a comprehensive bibliometric analysis of 2,006 publications to establish a framework for operationalizing trustworthiness in Large Language Models.

  • Identifies four key dimensions of LLM trustworthiness: reliability, transparency, fairness, and ethical alignment (see the illustrative sketch after this list)
  • Reveals the disconnect between theoretical discussions and practical implementation of trust mechanisms
  • Proposes actionable approaches to enhance trust in LLM deployments across various domains
  • Establishes a foundation for security standards in the rapidly evolving LLM landscape
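
As a rough illustration of how dimension tagging might be operationalized in a bibliometric pipeline, the minimal Python sketch below counts, per trust dimension, how many publication abstracts mention dimension-specific keywords. The keyword lexicons and the three-abstract corpus are hypothetical stand-ins; the paper's actual coding scheme and corpus processing are not specified here.

```python
from collections import Counter

# Illustrative keyword lexicons for the four trust dimensions
# (hypothetical terms; the paper's actual coding scheme may differ).
DIMENSIONS = {
    "reliability": {"robustness", "calibration", "hallucination", "accuracy"},
    "transparency": {"explainability", "interpretability", "disclosure"},
    "fairness": {"bias", "equity", "representation"},
    "ethical alignment": {"alignment", "safety", "values", "harm"},
}

def tag_dimensions(abstract: str) -> set[str]:
    """Return the trust dimensions whose keywords appear in an abstract."""
    tokens = set(abstract.lower().split())
    return {dim for dim, keywords in DIMENSIONS.items() if tokens & keywords}

def dimension_frequencies(abstracts: list[str]) -> Counter:
    """Count how many publications touch each dimension."""
    counts = Counter()
    for abstract in abstracts:
        counts.update(tag_dimensions(abstract))
    return counts

if __name__ == "__main__":
    corpus = [  # stand-in for the 2,006-publication corpus
        "Reducing hallucination improves accuracy and robustness of LLMs",
        "Bias audits promote equity in language model outputs",
        "Interpretability methods increase disclosure and explainability",
    ]
    for dim, n in dimension_frequencies(corpus).most_common():
        print(f"{dim}: {n} publication(s)")
```

A keyword-lexicon approach like this is only a first pass; real bibliometric studies typically refine such tags with co-citation or co-word analysis before drawing conclusions.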

For security professionals, the findings offer guidance on establishing accountability frameworks and implementing trust-enhancing techniques for responsible AI deployment in enterprise environments.

Mapping Trustworthiness in Large Language Models: A Bibliometric Analysis Bridging Theory to Practice