What Makes AI Trustworthy? New Study Reveals the Key Factors

From self-driving cars to urban planning tools, new research from the University of Colorado Boulder outlines how AI can earn human trust by combining transparency, ethical design, and user collaboration to create intelligent systems we can truly rely on.

Study: Factors influencing human trust in intelligent built environment systems. Image Credit: SewCreamStudio/Shutterstock.com

AI’s integration into daily life is rapidly advancing, with self-driving taxis and smart systems transforming homes, transportation, and workplaces. However, widespread adoption depends on one fragile element: trust.

Trust is critical in determining whether people will rely on AI.

The study, conducted at the Connected Informatics and Built Environment Research (CIBER) Lab, explores how AI can earn this trust, addressing the lack of clear definitions around trust's components.

The researchers suggest that modern views on AI often reflect ancient instincts of in-group trust and out-group mistrust. When AI is created by corporations or governments seen as "outsiders," distrust can emerge.

Overcoming this requires designing AI systems that are not just intelligent but also trustworthy, factoring in social, ethical, and legal considerations. The study also links this concept to the idea of urban resilience - how cities can sustain and recover functionality through human-AI collaboration - where trust in intelligent systems is vital for effective policy and infrastructure decision-making.

Foundations of Trustworthy AI Systems

Establishing initial trust in an AI system requires a foundation built on more than just algorithmic proficiency. The research distinguishes between “trust,” a human behavioral response, and “trustworthiness,” an intrinsic property of the AI system that includes reliability, safety, transparency, and fairness.

Trust is a profoundly personal and subjective phenomenon, influenced by an individual's experiences, cultural beliefs, value system, and even neurobiological wiring. Consequently, a system that one person finds reliable may be met with skepticism by another. For AI developers, this means moving beyond a one-size-fits-all approach to consider the specific social and cultural norms, preferences, and technological literacy of the intended users.

The technical and ethical dimensions of trustworthiness (reliability, security, and transparency) form a crucial part of this foundation. An AI tool must perform its tasks accurately and consistently, and its failure must not result in harm to people, property, or the environment. It must also provide robust security against unauthorized access and protect user privacy.

Transparency is also essential in dismantling the "black box" perception of AI, where users cannot see how their data is used or how decisions are made. The paper references the US National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF 1.0), which outlines core functions - GOVERN, MAP, MEASURE, and MANAGE - to guide how organizations assess and improve AI trustworthiness across metrics such as validity, resilience, and explainability.
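
To make the framework's shape concrete, here is a minimal Python sketch of how a team might track its coverage of the four RMF functions. The four function names come from NIST AI RMF 1.0, but the example checklist items and the code structure are illustrative assumptions, not part of the framework itself.

```python
# Illustrative sketch only: GOVERN, MAP, MEASURE, and MANAGE are the four
# core functions named in NIST AI RMF 1.0, but the example activities and
# this checklist structure are hypothetical, not NIST's specification.

AI_RMF_CHECKLIST = {
    "GOVERN": ["Assign accountability for AI risk", "Document policies"],
    "MAP": ["Identify intended context of use", "Catalog known failure modes"],
    "MEASURE": ["Track validity and resilience metrics", "Test explainability"],
    "MANAGE": ["Prioritize and mitigate identified risks", "Monitor post-deployment"],
}

def review_status(completed: dict[str, set[str]]) -> dict[str, float]:
    """Return the fraction of checklist items completed per RMF function."""
    return {
        fn: len(completed.get(fn, set()) & set(items)) / len(items)
        for fn, items in AI_RMF_CHECKLIST.items()
    }

# Example: a team that has finished both GOVERN items but nothing else.
print(review_status({"GOVERN": set(AI_RMF_CHECKLIST["GOVERN"])}))
# {'GOVERN': 1.0, 'MAP': 0.0, 'MEASURE': 0.0, 'MANAGE': 0.0}
```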

Finally, developers must strive to create ethical tools from the outset and establish methods to continuously measure and improve their tool’s trustworthiness after its launch, ensuring it remains aligned with user expectations and societal values. This continuous assessment reflects the study’s concept of “trust calibration,” the ongoing alignment between user perception and system performance over time.
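
One simple way to picture trust calibration is as the gap between what users believe about a system and how it actually performs. The sketch below is a hypothetical illustration of that idea under assumed inputs; it is not a metric taken from the study.

```python
# Hypothetical sketch of "trust calibration": compare users' reported trust
# (on a 0-1 scale) against the system's observed reliability over the same
# window. The metric and data here are illustrative assumptions.

def calibration_gap(reported_trust: list[float], outcomes: list[bool]) -> float:
    """Positive gap = over-trust (perception exceeds performance);
    negative gap = under-trust (performance exceeds perception)."""
    observed_reliability = sum(outcomes) / len(outcomes)
    mean_trust = sum(reported_trust) / len(reported_trust)
    return mean_trust - observed_reliability

# Example: users rate their trust highly while the system succeeds 7 times in 10.
gap = calibration_gap([0.9, 0.85, 0.95], [True, True, True, False, True,
                                          True, False, True, False, True])
print(f"calibration gap: {gap:+.2f}")  # +0.20: users over-trust the system
```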

Contextual Sensitivity, Ease of Use, and Trust Calibration

Trust is a dynamic relationship that evolves through interaction and context. The third pillar of trustworthy AI, therefore, is contextual sensitivity. An AI tool must be attuned to the specific problem it is designed to solve, incorporating as much situational information as possible to function reliably.

For instance, in a hypothetical scenario, an AI assistive tool named PreservAI is conceptualized to aid in the complex restoration of a historical building. Such a task involves balancing competing priorities like cost, energy efficiency, historical integrity, and safety. A trustworthy tool in this context would not operate in a vacuum. Instead, it would be designed to incorporate diverse stakeholder input, analyze nuanced trade-offs, and collaborate with human experts rather than replace their judgment.

In Behzadan’s case study, PreservAI draws on multiple machine learning models, including time-series forecasting, regression, and geospatial analysis, to inform sustainable retrofit recommendations for historic buildings, supported by continuous human oversight.
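
Since the paper describes PreservAI only conceptually, a rough sketch can suggest how outputs from several models might feed a draft recommendation that still requires human sign-off. All names, fields, and thresholds below are invented for illustration and are not the authors' implementation.

```python
# All names, fields, and thresholds here are invented for illustration; the
# paper presents PreservAI as a concept, not a concrete implementation.
from dataclasses import dataclass

@dataclass
class ModelOutputs:
    projected_energy_savings: float   # e.g., from a time-series forecasting model
    estimated_cost: float             # e.g., from a regression model
    flood_risk_score: float           # e.g., from a geospatial analysis, 0-1

def recommend_retrofit(m: ModelOutputs, budget: float) -> dict:
    """Combine model outputs into a draft retrofit recommendation that a
    human preservation expert must review before anything is acted on."""
    feasible = m.estimated_cost <= budget and m.flood_risk_score < 0.5
    return {
        "proceed": feasible,
        "rationale": f"savings={m.projected_energy_savings:.0f} kWh/yr, "
                     f"cost=${m.estimated_cost:,.0f}, flood risk={m.flood_risk_score:.2f}",
        "requires_human_signoff": True,  # continuous human oversight, per the study
    }

print(recommend_retrofit(ModelOutputs(12000.0, 480000.0, 0.2), budget=500000.0))
```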

The fourth pillar focuses on the user experience and the critical process of feedback. A system must not only be efficient but also engaging and easy to use, minimizing errors and proactively addressing potential user frustrations. Behzadan’s framework also accounts for anthropomorphic factors, such as how human-like AI interfaces can influence trust formation, while cautioning against overreliance on superficial cues that may not reflect true system reliability.

Finally, the last pillar is the capacity for trust calibration and repair. Trust can be lost, as demonstrated in past public examples where AI systems failed to meet ethical or technical expectations. The study introduces the concept of “Zero Trust AI,” which emphasizes a “trust but verify” approach to ensure accountability through constant evaluation rather than blind reliance.
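
In code, a "trust but verify" posture might look like a wrapper that refuses to act on any model output that fails an independent check. This is an illustrative pattern under assumed interfaces, not a prescribed Zero Trust AI implementation.

```python
# Illustrative "trust but verify" pattern: no model output is acted on without
# an independent check. The validator and example model are assumptions, not
# a specification of Zero Trust AI from the study.
from typing import Callable, TypeVar

T = TypeVar("T")

def verified(predict: Callable[..., T], validate: Callable[[T], bool]):
    """Wrap a model so every prediction passes a validation gate or is rejected."""
    def wrapper(*args, **kwargs) -> T:
        output = predict(*args, **kwargs)
        if not validate(output):
            raise ValueError(f"Output failed verification: {output!r}")
        return output
    return wrapper

# Example: a toy load estimator whose outputs must stay within a plausible range.
estimate_load = verified(
    predict=lambda area_m2: area_m2 * 0.12,   # hypothetical model
    validate=lambda kw: 0.0 < kw < 10_000.0,  # independent sanity check
)
print(estimate_load(250.0))  # 30.0, passes verification
```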

While some risk is inherent in sharing data with any AI system, this vulnerability is also what allows the systems to improve. When users engage meaningfully, the systems become more accurate, fair, and useful, creating a positive feedback loop where calibrated trust benefits both the human and the machine.

Conclusion

In the end, building trustworthy AI is about designing systems that people can rely on, understand, and feel comfortable using. It’s a complex process that blends human insight with technical strength, guided by strong ethics and real transparency. Behzadan’s model frames trust as both something we can measure - like how reliable or transparent a system is - and something we feel, like whether the system seems fair or respects our input.

Trust also isn’t something you build once and forget. It shifts over time, shaped by how people interact with the system, the context it’s used in, and how it responds when things go wrong. That’s why the idea of trust calibration is so important: keeping the relationship between people and AI in sync, even as expectations or conditions change.

The study also makes a broader point: trust needs to be built into the way we design technology for our cities and communities. As AI tools become part of everyday infrastructure, they should help create cities that are not only smart but also fair, sustainable, and centered on real human needs. The goal is to make AI that earns our trust and keeps it.

Journal Reference

Behzadan, A., & Dabiri, A. (2025). Factors influencing human trust in intelligent built environment systems. AI and Ethics. DOI: 10.1007/s43681-025-00813-6. https://link.springer.com/article/10.1007/s43681-025-00813-6

