The Iceberg Trust Model: Development Methodology and Framework Specification

Daniel Glinz

2026-04-20

This document presents the complete scientific methodology underpinning the Iceberg Trust Model (Glinz, 2015, 2025, 2026), a multi-level conceptual framework for digital trust. It traces every design decision from theoretical initiation through literature selection, grounded-theory coding of the literature, construct formation, and cue derivation. The goal is full transparency: every construct, every cue, and every architectural choice is justified with explicit academic grounding, ensuring the framework meets the standards of reproducibility and falsifiability expected of peer-reviewed research.

The Iceberg Trust Model is published under CC BY-SA 4.0 at iceberg.digital and operationalized as a structured knowledge graph within the Validant platform. The term “framework” is used throughout this document to denote the multi-level classification scheme of constructs and cues. Future formalization as an ontology in the Gruber (1993) sense, including OWL/RDFS representation and OntoClean (Guarino & Welty, 2002) metaproperty analysis, is identified as follow-on work (see Section 18).


1. Research Objective and Scope

1.1 Problem Statement

Digital trust is a prerequisite for the adoption of digital services, AI systems, and platform economies. Yet trust research is fragmented across disciplines: organizational psychology studies trustworthiness beliefs (Mayer, Davis, & Schoorman, 1995), information systems research examines institutional and technology-mediated trust (McKnight, Choudhury, & Kacmar, 2002; Gefen, Karahanna, & Straub, 2003), economics analyzes signaling under information asymmetry (Spence, 1973), governance scholarship addresses regulatory frameworks (NIST, 2023; EU AI Act, 2024), and human-computer interaction investigates trust in AI and autonomous systems (Glikson & Woolley, 2020; Schlicker et al., 2025). No single framework integrates these perspectives into a coherent, operationalizable classification scheme that captures both the visible signals organizations can design and the hidden psychological foundations that determine whether those signals produce trust.

1.2 Research Question

How can the multi-disciplinary determinants of digital trust be systematically organized into a unified conceptual framework that (a) distinguishes observable trust cues from latent psychological constructs, (b) is grounded in established trust theory across disciplines, (c) is operationalizable for assessment and measurement, and (d) accommodates the specific trust dynamics of AI systems?

1.3 Contribution

The Iceberg Trust Model addresses this question through four contributions:

  1. A systems-theoretic foundation rooted in Luhmann’s (1979) conceptualization of trust as a mechanism for reducing social complexity, extended through five distinct trust conceptualizations (R1 through R5) that capture the full spectrum of digital trust relationships.
  2. A four-layer architecture (Agency, Engineering, Governance, Institutional) derived from axial coding of the trust literature, reflecting the disciplinary boundaries that emerged inductively from the data.
  3. A two-zone framework that distinguishes visible trust cues (above the waterline) from hidden psychological constructs (below the waterline), grounded in McKnight et al.’s (2002) trust typology, Hoffmann, Lutz, and Meckel’s (2014) SEM-based cue study, and Schlicker et al.’s (2025) Trustworthiness Assessment Model (TrAM).
  4. A comprehensive L2 cue taxonomy of operationalized trust indicators, each with definition, rationale, trust-building description, and trust-eroding description, derived through a combination of practitioner input, structured literature review, and axial coding of the literature corpus (Wolfswinkel et al., 2013).
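The four cue fields named in contribution 4 map naturally onto a record type. The following is a minimal sketch in Python; the field layout and the example cue wording are illustrative, not excerpts from the published taxonomy.

```python
from dataclasses import dataclass

# Hypothetical record type for one L2 cue, mirroring the fields listed above:
# definition, rationale, trust-building description, trust-eroding description,
# plus literature grounding. Names and wording are illustrative only.
@dataclass(frozen=True)
class L2Cue:
    label: str           # short cue name
    definition: str      # what the cue is
    rationale: str       # why it matters, with literature anchor
    trust_building: str  # effect when the cue is present and credible
    trust_eroding: str   # effect when the cue is absent or violated
    sources: tuple       # literature grounding

third_party_seal = L2Cue(
    label="Third-party seal",
    definition="A certification mark issued by an independent assurance body.",
    rationale="Operates as a structural assurance signal (McKnight et al., 2002).",
    trust_building="Signals independent verification of stated practices.",
    trust_eroding="A revoked or counterfeit seal signals deception.",
    sources=("McKnight et al. (2002)",),
)
```

Making the trust-building and trust-eroding descriptions separate fields reflects the framework's position that a cue's absence or violation is not merely the negation of its presence.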

2. Theoretical Foundations

2.1 Trust as Complexity Reduction

The framework is anchored in Luhmann’s (1979) conceptualization of trust as a functional mechanism for reducing social complexity: trust enables action under uncertainty by bracketing risks that cannot be fully evaluated. Three consequences shape the framework design:

  1. Trust is relational. It exists between a trustor and a trustee within a specific context. The R1-R5 framework (Section 2.2) specifies the five relationship types the framework addresses.
  2. Trust is a substitute for information. Trust fills the gap between what can be verified and what must be assumed. This grounds the distinction between above-waterline cues (verifiable signals) and below-waterline constructs (assumptions without verification).
  3. Trust presupposes risk. Without vulnerability, trust is unnecessary (Rousseau et al., 1998, p. 395). This led to placing perceived risk as an environmental moderator at the waterline rather than as a construct (Section 8.1).

Luhmann’s distinction between personal trust (Vertrauen, requiring familiarity) and system trust (Systemvertrauen, operating through institutional guarantees) maps onto the model’s vertical axis: above-waterline constructs facilitate personal trust through visible cues, while below-waterline constructs capture the institutional and dispositional foundations that enable trust without personal familiarity.

2.2 Five Trust Conceptualizations: The R1-R5 Framework

Drawing on established trust theory across disciplines (Mayer, Davis, & Schoorman, 1995; McKnight, Choudhury, & Kacmar, 2002; Lukyanenko, Maass, & Storey, 2022; Lankton, McKnight, & Tripp, 2015), and as articulated in Glinz (2025, 2026), the R1-R5 framework distinguishes five conceptualizations of trust in digital contexts. Each conceptualization represents a different type of relationship between trustor and trustee, with different signal structures, different psychological mechanisms, and different design implications. The external disciplinary sources listed below carry the theoretical weight; the Glinz (2025, 2026) articulation organizes them for operationalization. The R1-R5 framework served as the organizing scaffold for literature selection (Section 3), ensuring that the framework’s evidence base covers the full spectrum of digital trust relationships rather than privileging any single disciplinary perspective.

R1: Trust in Persons and Organizations
  Trust relationship: Human-to-human or human-to-organization
  Definitional core: Willingness to be vulnerable to another party’s actions based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party
  Foundational sources: Mayer, Davis, & Schoorman (1995); Giffin (1967); Deutsch (1976); Fukuyama (1995); Rousseau et al. (1998)

R2: Trust in Digital Economy Agents
  Trust relationship: Human-to-digital-intermediary
  Definitional core: An implicit contractual relationship that stabilizes uncertain behavioral expectations by creating obligations that the trustee will not exploit the trustor’s vulnerability
  Foundational sources: Ripperger (2003); McKnight, Cummings, & Chervany (1998); McKnight et al. (2002); Gefen et al. (2003); Koufaris & Hampton-Sosa (2004); Dinev & Hart (2006)

R3: Trust in AI Systems
  Trust relationship: Human-to-AI-system
  Definitional core: A mental and physiological process in which a person considers the characteristics of the AI system as grounds for acts of trust
  Foundational sources: Lukyanenko et al. (2022); Glikson & Woolley (2020); Muir (1994); Lee & See (2004); Hoff & Bashir (2015); Choung, David, & Ross (2022); Thiebes, Lins, & Sunyaev (2021); Schlicker et al. (2025)

R4: Trust in the Interface
  Trust relationship: Human-to-interface
  Definitional core: Relational intelligence and sociocultural design for trust-promoting interaction between humans and digital artifacts
  Foundational sources: Bickmore & Cassell (2001); Zierau, Engel, Satzger, & Schwabe (2021); Vossing, Kuhl, Lind, & Satzger (2022); Van Pinxteren, Wetzels, Ruger, Pluymaekers, & Wetzels (2019)

R5: Trust and Personality
  Trust relationship: Individual-to-self (dispositional)
  Definitional core: Personality traits and individual differences that determine predisposition towards trust, independent of the trustee’s characteristics
  Foundational sources: Riedl (2022); McKnight et al. (2002); Szalma & Taylor (2011); Hoffmann et al. (2014)

Why five conceptualizations matter for framework design: A trust framework built exclusively from R1 sources would overrepresent interpersonal trustworthiness beliefs (ability, benevolence, integrity) while neglecting the technical infrastructure (R3), interface design (R4), and individual difference (R5) dimensions that are critical in digital contexts. The R1-R5 framework ensures that every construct and every L2 cue in the Iceberg Trust Model is traceable to at least one conceptualization, and that no conceptualization is systematically underrepresented.
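The two requirements just stated, every cue traceable to at least one conceptualization and no conceptualization systematically underrepresented, can be expressed as a simple coverage check. A sketch follows; the cue names and cue-to-conceptualization assignments are hypothetical placeholders, not the published mapping.

```python
# Toy traceability check for the R1-R5 coverage requirement described above.
# The cue_map entries are invented placeholders for illustration.
CONCEPTUALIZATIONS = {"R1", "R2", "R3", "R4", "R5"}

cue_map = {
    "third_party_seal": {"R2"},
    "model_card": {"R3"},
    "conversational_repair": {"R4"},
}

def untraceable(cues):
    """Cues that map to no valid conceptualization (should be empty)."""
    return [name for name, rels in cues.items() if not rels & CONCEPTUALIZATIONS]

def underrepresented(cues):
    """Conceptualizations with no assigned cue (coverage gaps to close)."""
    covered = set().union(*cues.values()) if cues else set()
    return CONCEPTUALIZATIONS - covered
```

On this toy mapping, `underrepresented` reports R1 and R5 as gaps, exactly the kind of imbalance the paragraph above warns against.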

mindmap
  root((Digital Trust<br/>R1 to R5))
    R1 Persons and Organizations
      Mayer Davis Schoorman 1995
      Rousseau et al 1998
      Blau 1964
      Spence 1973
    R2 Digital Economy Agents
      Ripperger 2003
      McKnight et al 2002
      Gefen et al 2003
      Dinev and Hart 2006
      Hoffmann Lutz Meckel 2014
    R3 AI Systems
      Lukyanenko et al 2022
      Glikson and Woolley 2020
      Hoff and Bashir 2015
      Schlicker et al 2025
      NIST 2023
    R4 Interface
      Bickmore and Cassell 2001
      Zierau et al 2021
      Vossing et al 2022
      Van Pinxteren et al 2019
    R5 Personality
      Riedl 2022
      Szalma and Taylor 2011

2.3 The Iceberg Metaphor: Theoretical Justification

The iceberg metaphor is not decorative. It captures a structural insight that emerges independently from multiple strands of trust research:

McKnight et al. (2002) distinguish between trust constructs that are directly observable (third-party seals, privacy policies, website quality) and those that are hidden psychological states (trusting beliefs, trusting intentions, disposition to trust). Their empirical model demonstrates that observable cues influence hidden beliefs, which in turn drive behavioral intentions. The waterline in the iceberg represents this causal boundary.

Hoffmann, Lutz, and Meckel (2014), in a structural-equation-modeling study of online trust among German Internet users, report that trust cues operate through distinct pathways: reciprocity cues build trusting beliefs, while brand cues bypass beliefs and drive intentions directly. This grounding supports treating above-waterline cues as operating through more than one mechanism and motivates the multi-construct above-waterline architecture (see Section 6.2, Decisions 2 and 3). Specific path coefficients and sample-size details cited elsewhere in this document are drawn from the published SEM and are verified against the primary source.

Schlicker et al.’s (2025) Trustworthiness Assessment Model (TrAM) provides the most rigorous recent specification of this above/below distinction. At the micro level, the TrAM applies Brunswik’s Lens Model: actual trustworthiness manifests through cues, and trustors detect and utilize those cues to form perceived trustworthiness. The above-waterline constructs in the Iceberg Trust Model function as the cue layer (the lens through which trustworthiness is perceived), while the below-waterline constructs represent the assessment processes (cue detection, utilization, belief formation). This mapping was a deliberate design choice, informed by TrAM, to ensure the framework reflects the empirically established distinction between what is observable and what is inferred.
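The micro-level lens structure described above can be sketched numerically: perceived trustworthiness forms from the cues a trustor detects and the weight each detected cue receives. The TrAM itself is conceptual and prescribes no formula, so the function below is an illustrative simplification; cue names, values, and weights are invented.

```python
# Illustrative weighted-aggregation sketch of cue detection and utilization.
# Not part of the TrAM specification; an assumption-laden toy model.
def perceived_trustworthiness(cues, detected, utilization):
    """Weighted aggregation of detected cues into a score in [0, 1].

    cues:        cue name -> manifestation strength in [0, 1]
    detected:    set of cue names the trustor notices (cue detection)
    utilization: cue name -> weight the trustor places on it (cue utilization)
    """
    used = {name: value for name, value in cues.items() if name in detected}
    total = sum(utilization.get(name, 0.0) for name in used)
    if total == 0.0:
        return 0.0  # nothing detected or nothing utilized: no basis to assess
    return sum(value * utilization.get(name, 0.0)
               for name, value in used.items()) / total
```

Undetected cues drop out entirely, which captures the model's point that actual trustworthiness influences perception only through cues that are both detected and utilized.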

2.4 The Multi-Level Digital Trust Framework

The Iceberg Trust Model comprises four interdependent layers, each representing a distinct mechanism through which trust is produced, maintained, or eroded:

graph TD
    A["Agency Layer<br/><i>Human experience, authenticity,<br/>perception, emotional resonance</i>"] --> E
    E["Engineering Layer<br/><i>Verifiable identity, provenance,<br/>security, hybrid architectures</i>"] --> G
    G["Governance Layer<br/><i>Adaptation, resilience,<br/>continuous assurance</i>"] --> I
    I["Institutional Layer<br/><i>Regulatory standards,<br/>public infrastructures, societal oversight</i>"]

    style A fill:#d4e8ef,stroke:#94b8c8,color:#1e3a4d
    style E fill:#9dc5d4,stroke:#6fa8be,color:#1e3a4d
    style G fill:#6fa8be,stroke:#4e8ba3,color:#fff
    style I fill:#4e8ba3,stroke:#3f7d96,color:#fff

Agency Layer
  Description: Trust shaped through human experience, authenticity, and autonomy preservation
  Trust mechanism: Reciprocity, Brand identity
  Grounding: Blau (1964); Spence (1973); Hoffmann et al. (2014)

Engineering Layer
  Description: Trust produced through verifiable identity, provenance, and technical reliability
  Trust mechanism: Technical Trust Infrastructure, Social Trust Mechanisms
  Grounding: McKnight et al. (2002); Sollner, Hoffmann, & Leimeister (2016); Helbing (2015)

Governance Layer
  Description: Trust maintained through organizational adaptation, resilience, and assurance
  Trust mechanism: Governance, Resilience & Assurance
  Grounding: NIST (2023); EU AI Act (2024); IIA (2020); Hollnagel, Woods, & Leveson (2006)

Institutional Layer
  Description: Trust maintained through regulatory standards and societal oversight
  Trust mechanism: Institution-based, Trusting Beliefs, Disposition to Trust, Trusting Intentions & Behaviors
  Grounding: Luhmann (1979); McKnight et al. (2002); Mayer et al. (1995)

The layer ordering is not arbitrary. It reflects Luhmann’s (1979) systems-theoretic principle that higher-level trust mechanisms (personal experience, brand familiarity) depend on lower-level guarantees (institutional assurance, regulatory frameworks), a principle further developed in Sollner, Hoffmann, and Leimeister (2016) and Helbing (2015). When any lower layer is absent, upper-layer trust signals become unreliable: a brand’s promise of data privacy (Agency Layer) is meaningless without encryption and access controls (Engineering Layer), governance policies (Governance Layer), and enforceable regulations (Institutional Layer). Trust emerges when all layers reinforce one another; it collapses when any layer is absent, as articulated in Glinz (2025).
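The dependency principle just stated can be made concrete with a toy check: a trust signal at a given layer is treated as reliable only when that layer and every layer beneath it hold. The boolean model below is an illustrative simplification of the systems-theoretic argument, not part of the framework specification.

```python
# Toy sketch of the layer-dependency principle: upper-layer signals are only
# reliable when all lower layers are intact. Layer names follow the table above.
LAYERS = ["Institutional", "Governance", "Engineering", "Agency"]  # bottom-up

def signal_reliable(signal_layer: str, layer_ok: dict) -> bool:
    """Return True if `signal_layer` and all layers below it are intact."""
    idx = LAYERS.index(signal_layer)
    return all(layer_ok[layer] for layer in LAYERS[: idx + 1])
```

For example, a privacy promise at the Agency layer is flagged unreliable the moment the Engineering layer (encryption, access controls) fails, mirroring the example in the text.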


3. Methodological Framework

3.1 Why a Grounded-Theory Literature Review?

The Iceberg Trust Model was developed using the grounded-theory literature review approach as formalized by Wolfswinkel, Furtmueller, and Wilderom (2013) in the European Journal of Information Systems. This method adapts the three-phase coding process of Glaser and Strauss (1967) and Strauss and Corbin (1990, 1998) to the structured synthesis of an existing literature corpus, rather than to iterative theoretical sampling from fresh empirical fieldwork. It is the appropriate qualitative analog when the primary data source is a defined body of published scholarship.

The approach was selected over deductive approaches (e.g., starting from a single theoretical model and testing hypotheses) for three reasons:

  1. Disciplinary integration. Trust research spans at least six disciplines (organizational psychology, information systems, economics, governance, resilience engineering, HCI). No single deductive framework accommodates all of them. An inductive coding approach allows categories to emerge from the literature without imposing a single discipline’s taxonomy.
  2. Framework construction. The goal was to build a comprehensive multi-level classification scheme, not to test a specific hypothesis. Coding procedures derived from grounded theory are well suited to this synthesis task when applied to literature (Wolfswinkel et al., 2013). An alternative lineage with the same scope is qualitative framework synthesis (Barnett-Page & Thomas, 2009).
  3. Transparency and reproducibility. The three-phase coding process (open coding, axial coding, selective coding) provides an explicit audit trail from raw source passages to theoretical categories, enabling other researchers to scrutinize and replicate the analytical decisions (Saldana, 2021, Chapter 15).

Methodological framing note. This study is not a grounded theory in the Glaserian sense of iterative theoretical sampling from empirical fieldwork. Source selection was purposive from a predefined interdisciplinary literature corpus. The Wolfswinkel et al. (2013) approach explicitly legitimates this adaptation: grounded-theory coding procedures applied to a literature corpus for the purpose of rigorous synthesis.

3.2 The Three Coding Phases

The model development followed the three sequential phases specified by Strauss and Corbin (1998, Part II: Coding Procedures):

Phase 1: Open Coding (First Cycle). Line-by-line analysis of each source text, extracting discrete concepts (codes) that capture trust-relevant phenomena. Each code consists of: (a) a short label, (b) a definition, (c) the source text passage, and (d) the discipline of origin. The goal is to “open up” the data and identify as many relevant concepts as possible without imposing preconceived categories (Strauss & Corbin, 1998, Chapter 8). Key operations include labeling, categorizing, and identifying properties and dimensions for each emerging category (Corbin & Strauss, 2015, Chapter 12).

Phase 2: Axial Coding (Second Cycle). The core analytical phase. The term “axial” refers to coding around the axis of a category: identifying relationships between the open codes and reassembling them into coherent categories and subcategories (Strauss & Corbin, 1998, p. 123). Strauss and Corbin proposed a coding paradigm that structures these relationships through six elements: phenomenon, causal conditions, context, intervening conditions, action/interaction strategies, and consequences. In the fourth edition, Corbin and Strauss (2015) simplified this to three core elements: conditions, actions-interactions, and consequences. The key analytical operation is constant comparison: comparing code against code, category against category, to identify similarities and differences (Glaser & Strauss, 1967; Charmaz, 2006).

Phase 3: Selective Coding. Identification of the core category that integrates all other categories into a coherent theoretical narrative (Strauss & Corbin, 1998, Chapter 12). The core category answers: “What is this research fundamentally about?”
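The coding artifacts of Phases 1 and 2 can be sketched as data structures. The OpenCode fields mirror the four elements (a) through (d) listed under Phase 1; the example codes, placeholder passages, and the category label are illustrative, not entries from the actual codebook.

```python
from collections import defaultdict
from dataclasses import dataclass

# Sketch of a Phase 1 open-code record and a Phase 2 regrouping step.
# Example contents are illustrative only.
@dataclass(frozen=True)
class OpenCode:
    label: str       # (a) short label
    definition: str  # (b) definition
    passage: str     # (c) source text passage
    discipline: str  # (d) discipline of origin

def axial_group(codes, assign):
    """Phase 2 sketch: reassemble open codes into categories. `assign` stands
    in for the analyst's constant-comparison judgment."""
    categories = defaultdict(list)
    for code in codes:
        categories[assign(code)].append(code)
    return dict(categories)

codes = [
    OpenCode("structural assurance", "Belief that protective structures exist",
             "(source passage)", "Information Systems"),
    OpenCode("regulatory framework", "Legally binding protection rules",
             "(source passage)", "Governance"),
]
# Both codes share the property "institutional protection mechanism".
categories = axial_group(codes, lambda c: "Institutional protection")
```

Keeping the source passage and discipline on every code is what makes the audit trail from raw text to category possible.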

flowchart LR
    Corpus["34-Source<br/>Literature Corpus<br/><i>R1 to R5 coverage</i>"]
    Open["Phase 1<br/>Open Coding<br/><b>247 concepts</b><br/><i>line-by-line</i>"]
    CC["Constant<br/>Comparison<br/><i>Glaser and Strauss 1967</i>"]
    Axial["Phase 2<br/>Axial Coding<br/><b>15 categories</b><br/><i>conditions actions consequences</i>"]
    Sel["Phase 3<br/>Selective Coding<br/><b>Core category</b><br/><i>digital trust formation</i>"]
    Frame["Framework Construction<br/><b>10 L1 constructs</b><br/><b>124 L2 cues</b><br/><i>9 design decisions + ATB</i>"]

    Corpus --> Open
    Open --> CC
    CC --> Axial
    Axial --> Sel
    Sel --> Frame

    style Corpus fill:#f4f4f4,stroke:#999,color:#222
    style Open fill:#d4e8ef,stroke:#94b8c8,color:#1e3a4d
    style CC fill:#b8d6e0,stroke:#8ab0c0,color:#1e3a4d
    style Axial fill:#9dc5d4,stroke:#6fa8be,color:#1e3a4d
    style Sel fill:#6fa8be,stroke:#4e8ba3,color:#fff
    style Frame fill:#4e8ba3,stroke:#3f7d96,color:#fff

3.3 How Axial Coding Was Applied

The following step-by-step procedure was followed:

Step 1: Corpus assembly. Sources were selected using purposive theoretical sampling (Glaser & Strauss, 1967), organized by the R1-R5 framework to ensure coverage across all five trust conceptualizations (see Section 4).

Step 2: Open coding pass. Each source was read in full. For every trust-relevant concept encountered, a code was created with label, definition, source passage, and discipline. This produced approximately 250 trust-related concepts across seven disciplinary domains (see Audit Trail, Section 3).

Step 3: Constant comparison. Codes were compared pairwise to identify overlaps, distinctions, and hierarchical relationships. For example, “structural assurance” (McKnight et al., 2002) and “regulatory framework” (EU AI Act, 2024) share the property of “institutional protection mechanism” but differ on the dimension of “formality” (legal statute vs. perceived belief).

Step 4: Category formation (axial coding). Codes were grouped into categories based on shared properties and dimensions, using the coding paradigm (conditions, actions-interactions, consequences) as a structuring device. Each category was named using the most descriptive label from the literature. This produced 15 axial categories (see Grounded Theory Coding Audit Trail, Section 5).

Step 5: Paradigm mapping. For each category, the coding paradigm was applied: What conditions give rise to this trust phenomenon? What actions/interactions does it involve? What consequences does it produce?

Step 6: Selective coding. The 15 categories were integrated around the core category of “digital trust formation”, revealing that the literature captures both the static architecture of trust (constructs and cues) and the dynamic processes (formation, calibration, violation, repair).

Step 7: Framework construction. The 15 emergent categories were consolidated into 10 L1 constructs through deliberate design decisions documented in Section 6. The initial axial consolidation yielded 9 constructs; Affective Trusting Beliefs was subsequently added as a 10th construct alongside Cognitive Trusting Beliefs, following McAllister’s (1995) cognition/affect distinction, confirmed for AI contexts by Glikson & Woolley (2020) and Schlicker et al. (2025). The consolidation involved merging categories that share functional roles, elevating categories whose digital-context importance warrants distinct treatment, and placing each category at its appropriate level (construct, environmental moderator, or process overlay).
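Step 3 above can be sketched as a pairwise comparison over analyst-assigned properties, surfacing shared properties as candidate axes for category formation. The property sets below echo the structural assurance / regulatory framework example but are otherwise illustrative.

```python
from itertools import combinations

# Toy constant-comparison sketch: each code carries analyst-assigned
# properties; shared properties across a pair suggest a common category axis.
code_properties = {
    "structural assurance": {"institutional protection mechanism", "perceived belief"},
    "regulatory framework": {"institutional protection mechanism", "legal statute"},
    "brand familiarity": {"experiential", "perceived belief"},
}

def shared_properties(props):
    """Map each code pair to the properties it shares; pairs sharing nothing
    are dropped."""
    shared = {}
    for a, b in combinations(sorted(props), 2):
        common = props[a] & props[b]
        if common:
            shared[(a, b)] = common
    return shared
```

Here "structural assurance" and "regulatory framework" surface their shared "institutional protection mechanism" property while still differing on the formality dimension (legal statute vs. perceived belief), the comparison described in Step 3.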

3.4 Methodological References


4. Literature Corpus

4.1 Sampling Strategy

The framework was constructed from 34 primary sources and 17 cross-cutting frameworks spanning 1964 to 2025. Sources were selected using purposive sampling from the interdisciplinary trust literature, consistent with the grounded-theory literature-review approach of Wolfswinkel et al. (2013). The corpus was compiled to achieve conceptual coverage across the R1-R5 relationship types, with at least two sources per relationship type to support constant-comparison analysis. This is not a systematic review in the PRISMA sense; it is a theoretical review (Pare, Trudel, Jaana, & Kitsiou, 2015) aimed at framework construction.

Sources were identified through three channels: (1) database searches across Scopus, Web of Science, ACM Digital Library, IEEE Xplore, PsycINFO, and Google Scholar, structured by the R1-R5 framework; (2) backward and forward citation tracing from anchor sources (Mayer et al., 1995; McKnight et al., 2002); and (3) institutional sources (NIST, EU AI Act, IIA) not indexed in academic databases.

Selection criteria: disciplinary breadth across seven fields, citation influence (>500 citations for pre-2010 works), temporal breadth (1964-2025), methodological diversity, at least two sources per R1-R5 conceptualization, and conceptual coverage plateau as the stopping criterion (see Section 4.2).

The above-waterline cue derivation additionally drew on the ITI Questionnaire v8 (2025), an instrument under development by the author (see Section 9.1 and the Limitations in Section 4.9). Full psychometric validation of the ITI Questionnaire (pilot, EFA, CFA) is the subject of a forthcoming paper. In the present work, the v8 item pool functioned as a structured prompt for cue derivation; claims of methodological triangulation are deferred until the instrument has been externally administered.

4.2 Conceptual Coverage Assessment

The present work does not claim theoretical saturation in the Glaserian sense, which would require iterative sampling driven by ongoing analysis. Instead, the corpus was assessed for conceptual coverage plateau: the cumulative count of distinct axial categories ceased to grow as additional sources were coded. Category emergence reached a plateau at source #25 (Hollnagel et al., 2006), with all 15 axial categories established. Sources 26-34 enriched existing categories with new properties and dimensions but did not produce new categories. This pattern is consistent with Guest, Bunce, and Johnson’s (2006) finding that thematic plateau typically occurs between 12 and 30 sources. The full coverage assessment (including source-by-category emergence matrix and empirical coverage from incident classification) is documented in the Audit Trail, Section 9.
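The plateau criterion can be stated operationally: code sources in sequence, track the cumulative set of distinct axial categories, and record the last source at which a genuinely new category appears. A sketch on invented toy data (not the actual coding record):

```python
# Operational sketch of the conceptual-coverage-plateau check described above.
def plateau_source(categories_per_source):
    """Return the 1-based index of the source after which no new category
    ever emerges (0 if the input is empty)."""
    seen, last_new = set(), 0
    for index, categories in enumerate(categories_per_source, start=1):
        new = set(categories) - seen
        if new:
            last_new = index
            seen |= new
    return last_new

toy = [["A", "B"], ["B", "C"], ["C"], ["C", "D"], ["A"], ["B"]]
```

On the toy data the cumulative category set stops growing at source 4; in the actual corpus the analogous point was source #25 (Hollnagel et al., 2006).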

4.7 Complete Source Corpus (34 Primary Sources by R1-R5 Conceptualization)

The corpus is organized by the R1-R5 conceptualization that each source primarily addresses. Many sources contribute to multiple conceptualizations; the primary assignment reflects each source’s dominant contribution. The full corpus with detailed contribution descriptions is documented in the Grounded Theory Coding Audit Trail, Section 2.3.

R1: Trust in Persons and Organizations (Interpersonal trustworthiness)

  1. Mayer, Davis, & Schoorman (1995). Foundational theory. ABI trustworthiness model; risk moderation.
  2. Giffin (1967). Foundational theory. Source credibility as trust precursor.
  3. Deutsch (1976). Foundational theory. Trust as cooperative expectation; equity norms.
  4. Fukuyama (1995). Social theory. Social capital; cultural trust radius.
  5. Rousseau, Sitkin, Burt, & Camerer (1998). Definitional consensus. Cross-disciplinary trust definition; risk as precondition.
  6. McAllister (1995). Empirical (SEM). Cognition-based vs. affect-based trust.
  7. Blau (1964). Foundational theory. Social exchange theory; reciprocity.
  8. Spence (1973). Foundational theory. Signaling under information asymmetry.

R2: Trust in Digital Economy Agents (Human-to-digital-intermediary)

  9. Ripperger (2003). Economic theory. Trust as implicit contract; uncertainty management.
  10. McKnight, Cummings, & Chervany (1998). Theory. Initial trust formation.
  11. McKnight, Choudhury, & Kacmar (2002). Empirical + theory. E-commerce trust typology.
  12. Gefen (2000). Empirical (SEM). Familiarity in e-commerce trust.
  13. Gefen, Karahanna, & Straub (2003). Empirical (SEM). Trust and TAM integration.
  14. Koufaris & Hampton-Sosa (2004). Empirical. Initial trust in online companies.
  15. Dinev & Hart (2006). Empirical. Extended privacy calculus.
  16. Hoffmann, Lutz, & Meckel (2014). Empirical (SEM). Trust cue effects by user segment.
  17. Pavlou & Gefen (2004). Empirical. Institution-based marketplace trust.
  18. Hendrikx, Bubendorfer, & Chard (2015). Survey/taxonomy. Reputation systems classification.

R3: Trust in AI Systems (Human-to-AI-system)

  19. Lukyanenko et al. (2022). Theoretical framework. Trust as mental process considering AI characteristics.
  20. Glikson & Woolley (2020). Review. Anthropomorphism activates emotional trust.
  21. Muir (1994). Foundational theory. Trust calibration in human-machine interaction.
  22. Hoff & Bashir (2015). Integrative review. Three temporal layers: dispositional, situational, learned.
  23. Choung, David, & Ross (2022). Empirical. Trust in AI and acceptance.
  24. Thiebes, Lins, & Sunyaev (2021). Review. Trustworthy AI requirements.
  25. Hollnagel, Woods, & Leveson (2006). Foundational theory. Resilience engineering.
  26. NIST (2023). Government framework. AI Risk Management Framework.
  27. Schlicker, Baum et al. (2025). Conceptual model. TrAM; Brunswik’s Lens Model for trust.
  28. Schlicker, Lechner et al. (2025). Qualitative (N=65). LLM trustworthiness assessment.

R4: Trust in the Interface (Human-to-interface relational trust)

  29. Bickmore & Cassell (2001). Empirical + design. Relational agents; rapport-building.
  30. Zierau, Engel, Satzger, & Schwabe (2021). Design science. Conversational agent trust design.
  31. Vossing, Kuhl, Lind, & Satzger (2022). Design science. Transparency design for human-AI collaboration.
  32. Van Pinxteren et al. (2019). Experimental. Anthropomorphic trust effects.

R5: Trust and Personality (Individual dispositional differences)

  33. Riedl (2022). Review. Personality-trust neuroscience; Big Five.
  34. Szalma & Taylor (2011). Experimental. Five Factor Model and automation trust.

Cross-cutting frameworks (referenced throughout but not primary coded sources): EU AI Act (2024), ISACA DTEF (2024), WEF (2022), IIA (2020), Floridi & Cowls (2019), Botsman (2017), Lewicki & Bunker (1995), Lewicki & Brinsfield (2017), Lewicki, McAllister, & Bies (1998), McKnight & Chervany (2001), Kim et al. (2004, 2008), Lankton et al. (2015), Sollner et al. (2016), Helbing (2015), Raji et al. (2020), Makridis et al. (2024), Lahusen et al. (2024), Schuetz et al. (2025), Doney, Cannon, & Mullen (1998), Bart et al. (2005).

4.9 Acknowledged Limitations and Mitigation

Not a full PRISMA systematic review
  Description: Purposive sampling from the published literature, not exhaustive retrieval. Relevant publications may have been missed.
  Mitigation: (1) PRISMA-informed documentation of search protocol, databases, strings, and inclusion/exclusion criteria per STARLITE (Booth, 2006). (2) Conceptual coverage assessment shows no new categories from additional sources (Guest et al., 2006). (3) The R1-R5 framework ensures no trust relationship type is systematically underrepresented. (4) Wolfswinkel, Furtmueller, and Wilderom (2013) explicitly legitimate grounded-theory coding applied to a literature corpus.
  Residual risk: Low. The coverage plateau provides a principled stopping criterion. Any missed source would need to produce a 16th axial category to change the framework’s structure.

Language bias
  Description: All sources but one are in English; the exception is the German-language Ripperger (2003).
  Mitigation: English is the dominant publication language for trust research. The German-language inclusion captures Luhmann (1979) and Ripperger (2003), the two most cited non-English trust conceptualizations.
  Residual risk: Low. Major non-English trust research (e.g., Japanese, Chinese) is typically published in English for international audiences.

Recency bias
  Description: AI governance frameworks (2023-2025) have not yet been subjected to longitudinal empirical validation.
  Mitigation: Balanced by foundational works (1964-1998) with decades of validation. AI governance sources are included because the framework must address current regulatory reality.
  Residual risk: Medium. AI governance constructs may require revision as regulation matures. The modular architecture (L2 cues within L1 constructs) supports targeted updates without restructuring the entire framework.

Single-coder analysis
  Description: Coding was performed by a single researcher, introducing potential analytical bias.
  Mitigation: (1) Documented constant-comparison protocol enabling independent audit (see Coding Audit Trail). (2) Paradigm mapping (conditions, actions-interactions, consequences) for every category. (3) Internal consistency check (construct boundary validation) through constant comparison (Glaser & Strauss, 1967); boundary cases with decision rationales are documented. (4) Preliminary evidence from adjacent trust literature (Gefen, Karahanna, & Straub, 2003; Kim, Ferrin, & Rao, 2008; Beldad, de Jong, & Steehouder, 2010; Kaplan et al., 2023) is consistent with the above/below waterline distinction. (5) The Schlicker et al. (2025) qualitative study (N=65) converges with the framework’s assessment-process and cue-layer architecture.
  Residual risk: Medium. Formal inter-rater reliability cannot be reported. Future work: expert-panel validation (Delphi method) to establish content validity; independent dual coding of a random source subset.

ITI Questionnaire v8 status
  Description: The ITI Questionnaire v8 (2025) is an instrument under development by the author, consisting of 72 draft items across the R, B, TI, and ST constructs. It has not yet been administered to an external sample.
  Mitigation: In this work the v8 item pool functioned as a structured prompt for cue derivation only. Claims of methodological triangulation between the instrument and the academic coding are deferred until external administration.
  Residual risk: Medium. Validation (pilot, EFA, CFA) is the subject of a forthcoming paper (see Section 18, Future Work).

Scope: construct validity only
  Description: This methodology assesses construct validity through theoretical grounding and internal consistency checking. Predictive validity (does the framework predict trust outcomes?) requires separate empirical investigation.
  Mitigation: Stated as an explicit scope boundary. Independent empirical validation is identified as a priority for future research. Preliminary evidence from adjacent trust literature is cited in Section 12.
  Residual risk: Medium. Predictive validation is the next methodological priority.

5. Open Coding Results

Open coding of the 34 primary sources and 17 cross-cutting frameworks extracted approximately 250 trust-related concepts across seven disciplinary domains. The discipline-level counts are summarized below (the full category-level analysis is documented in the Audit Trail, Section 3):

Discipline Codes Representative Concepts Source(s)
Organizational Psychology 38 Ability, benevolence, integrity, predictability, cognition-based trust, affect-based trust, emotional bonds, trust repair, apology strategy, denial strategy Mayer et al. (1995); McAllister (1995); McKnight et al. (2002); Kim et al. (2004)
Information Systems 42 Structural assurance, situational normality, familiarity, system quality, user control, third-party endorsements, initial trust, trust typology, trust transfer McKnight et al. (1998, 2002); Gefen (2000, 2003); Hoffmann et al. (2014); Sollner et al. (2016)
Economics / Signaling 18 Costly signals, information asymmetry, brand investment, warranties, implicit contracts, uncertainty stabilization, privacy calculus, data reciprocity Spence (1973); Ripperger (2003); Dinev & Hart (2006); Blau (1964)
Social Psychology 24 Reciprocity norms, social exchange, social proof, distrust independence, reputation systems, endorsement credibility, content integrity Blau (1964); Lewicki et al. (1998); Pavlou & Gefen (2004); Hendrikx et al. (2015)
Governance / Regulation 36 Risk classification, transparency obligations, accountability, human oversight, fairness auditing, adaptive policy, resilience, stakeholder engagement NIST (2023); EU AI Act (2024); IIA (2020); Floridi & Cowls (2019); Hollnagel et al. (2006)
HCI / AI Trust 52 Cue relevance, cue detection, cue utilization, individual standards, aesthetics as metastandard, relational intelligence, automation bias, algorithm aversion, system-like trust Schlicker et al. (2025); Glikson & Woolley (2020); Bickmore & Cassell (2001); Lukyanenko et al. (2022); Lankton et al. (2015)
Personality / Individual Differences 16 Faith in humanity, trusting stance, risk propensity, technology readiness, Big Five personality-trust links, digital native vs. digital immigrant patterns Riedl (2022); Szalma & Taylor (2011); Hoffmann et al. (2014); Hoff & Bashir (2015)
Trust Dynamics / Temporal 21 Calculus-based trust, knowledge-based trust, identification-based trust, trust calibration, overtrust, undertrust, relationship equity, feedback loop Lewicki & Bunker (1995); Lee & See (2004); de Visser et al. (2020); Kim et al. (2004, 2008)

6. Axial Coding: From Open Codes to 15 Categories to 10 Constructs

6.1 The 15 Emergent Categories

Grouping the open codes by functional relationships, using the coding paradigm (conditions, actions-interactions, consequences) and constant comparison, produced 15 natural categories. These categories emerged from the data through inductive analysis. The complete paradigm mapping for each category is documented in the Grounded Theory Coding Audit Trail, Section 5.

Category 1: Trustworthiness Beliefs (ABI+P)

Discipline: Organizational Psychology. Core concepts: Ability/Competence, Benevolence, Integrity, Predictability. Key sources: Mayer et al. (1995); McKnight et al. (2002); Lee & See (2004). Empirical grounding: Extremely strong (>20,000 citations for Mayer model alone).

Category 2: Dispositional Trust

Discipline: Personality Psychology. Core concepts: Faith in humanity, trusting stance, risk propensity, technology readiness. Key sources: McKnight et al. (2002); Rotter (1967); Riedl (2022). Empirical grounding: Extremely strong.

Category 3: Institutional Trust

Discipline: Sociology / IS. Core concepts: Structural assurance, situational normality, regulatory framework, platform trust. Key sources: McKnight et al. (2002); Zucker (1986); Gefen & Pavlou (2012). Empirical grounding: Extremely strong.

Category 4: Trust Intentions and Behavior

Discipline: Social Psychology / Theory of Reasoned Action. Core concepts: Willingness to depend, information disclosure, purchase intention, continued use, recommendation behavior. Key sources: McKnight et al. (2002); Ajzen (1991); Gefen et al. (2003). Empirical grounding: Strong.

Category 5: Brand and Reputation Signals

Discipline: Marketing / Economics. Core concepts: Brand trust (reliability + intentionality), costly signaling, familiarity, design quality, brand affect. Key sources: Spence (1973); Chaudhuri & Holbrook (2001); Hoffmann et al. (2014). Empirical grounding: Strong.

Category 6: Fair Exchange and Reciprocity

Discipline: Social Exchange Theory. Core concepts: Data reciprocity, fair information practices, warranties, pricing transparency, procedural fairness. Key sources: Blau (1964); Ashworth & Free (2006); Hoffmann et al. (2014). Empirical grounding: Moderate as a trust construct; strong as a mechanism.

Category 7: Privacy, Security, and Technical Trust Infrastructure

Discipline: IS / Computer Science / Cybersecurity. Core concepts: User control over personal data, identity and access management, encryption and privacy-enhancing technologies, zero-knowledge proofs, data portability, federated learning, zero trust architecture, secure data governance. Key sources: McKnight et al. (2002); Smith, Dinev, & Xu (2011); NIST (2023); Helbing (2015); Sollner et al. (2016). Empirical grounding: Strong for the domain.

Category 8: Social Proof, Reputation, and Community Trust Mechanisms

Discipline: Social Psychology / HCI / Platform Economics. Core concepts: Reputation systems, third-party endorsements and seals, user-generated reviews, community moderation, social translucence, content integrity safeguards. Key sources: Pavlou & Gefen (2004); Bart, Shankar, Sultan, & Urban (2005); Hoffmann et al. (2014); Hendrikx, Bubendorfer, & Chard (2015). Empirical grounding: Strong for the domain.

Category 9: Governance and Organizational Accountability

Discipline: Governance / Regulation. Core concepts: Adaptive policy, risk management, audit and assurance, regulatory compliance, incident response, resilience, stakeholder engagement. Key sources: NIST (2023); EU AI Act (2024); IIA (2020); Hollnagel et al. (2006). Empirical grounding: Strong for individual components.

Category 10: Perceived Risk

Discipline: Decision Theory / Psychology. Core concepts: Risk as trust moderator, dual-pathway model, vulnerability acceptance, stakes assessment. Key sources: Mayer et al. (1995); Kim, Ferrin, & Rao (2008); Rousseau et al. (1998). Empirical grounding: Extremely strong.

Category 11: Affective Trust

Discipline: Organizational Psychology. Core concepts: Emotional bonds, empathy, care and concern, affect-based vs. cognition-based trust. Key sources: McAllister (1995); Lewis & Weigert (1985); Glikson & Woolley (2020); Bickmore & Cassell (2001). Empirical grounding: Extremely strong.

Category 12: Trust Dynamics and Lifecycle

Discipline: Trust Theory. Core concepts: Calculus-based to identification-based stages, trust calibration, feedback loops, temporal evolution. Key sources: Lewicki & Bunker (1995); McKnight et al. (1998); Mayer et al. (1995); Schlicker et al. (2025). Empirical grounding: Strong.

Category 13: Trust Repair

Discipline: Organizational Psychology. Core concepts: Competence vs. integrity violations, apology vs. denial strategies, recovery mechanisms. Key sources: Kim, Ferrin, Cooper, & Dirks (2004); Lewicki & Brinsfield (2017); Tomlinson, Nelson, & Langlinais (2020). Empirical grounding: Strong.

Category 14: Distrust as Separate Construct

Discipline: Social Psychology. Core concepts: Trust and distrust as independent dimensions, simultaneous trust/distrust, watchful trust. Key sources: McKnight & Chervany (2001); Lewicki et al. (1998); Lahusen et al. (2024). Empirical grounding: Moderate to strong.

Category 15: AI-Specific Trust Dimensions

Discipline: HCI / AI Ethics. Core concepts: Automation bias, algorithm aversion, anthropomorphism effects, LLM trustworthiness (truthfulness, safety, fairness), system-like vs. human-like trust. Key sources: Lankton, McKnight, & Tripp (2015); Glikson & Woolley (2020); Schlicker et al. (2025). Empirical grounding: Emerging but rapidly growing.

6.2 Consolidation: From 15 Categories to 10 Constructs

The 15 emergent categories were consolidated into 10 L1 constructs through deliberate design decisions. Axial consolidation produced an initial 9 constructs; Decision 7 below records the addition of Affective Trusting Beliefs as a 10th construct, distinct from Cognitive Trusting Beliefs. Each decision is documented with explicit rationale.

Decision 1: Categories 1-4 become below-waterline constructs. Categories 1 (Trustworthiness Beliefs), 2 (Dispositional Trust), 3 (Institutional Trust), and 4 (Trust Intentions/Behavior) are psychological states and behavioral outcomes, not observable cues. They were placed below the waterline in the Institutional Layer, faithfully implementing McKnight et al.’s (2002) trust typology. The four below-waterline constructs are: Trusting Beliefs (TB), Disposition to Trust (DT), Institution-based (IB), and Trusting Intentions & Behaviors (TIB).

Academic justification: McKnight et al. (2002) is the most validated trust typology in IS research. Departing from its structure would require strong evidence that an alternative taxonomy better captures the latent trust constructs. No such evidence was found.

Decision 2: Category 5 (Brand) becomes an L1 construct. Brand and reputation signals were elevated to a primary above-waterline construct because signaling theory (Spence, 1973) and brand-trust research (Chaudhuri & Holbrook, 2001; Erdem & Swait, 1998) treat brand investment as a costly signal with a trust pathway distinct from trustworthiness-belief formation, and because Hoffmann, Lutz, and Meckel (2014) report that brand cues drive behavioral intentions through a direct pathway rather than exclusively through trusting beliefs. This direct pathway supports treating Brand as a first-class construct rather than subsuming it under another category. Brand is placed in the Agency Layer because brand perception is fundamentally a human-experience phenomenon shaped by familiarity, narrative, and emotional resonance (Chaudhuri & Holbrook, 2001).

Decision 3: Category 6 (Reciprocity) becomes an L1 construct. In established trust literature, reciprocity is typically treated as a mechanism or antecedent (Blau, 1964; Cialdini, 2001) rather than a trust construct containing sub-cues. The decision to elevate Reciprocity to a primary construct was driven by its empirical salience in digital contexts: Hoffmann, Lutz, and Meckel (2014) report that reciprocity cues have a strong effect on trusting beliefs, among the strongest cue categories tested. Furthermore, in digital platform economies, the fairness of value exchange (data reciprocity, pricing transparency, algorithmic fairness) is a central trust concern that cross-cuts traditional construct boundaries (Ashworth & Free, 2006; Dinev & Hart, 2006). Reciprocity is placed in the Agency Layer because fair exchange is experienced at the human-interaction level.

Decision 4: Categories 7 and 8 become two Engineering Layer constructs. Category 7 (technical trust infrastructure) and Category 8 (social proof/community mechanisms) capture distinct trust-producing mechanisms: technology-mediated trust and socially-mediated trust, respectively (Sollner et al., 2016). The literature’s natural higher-level distinction is between these two modes of trust production. The two constructs are named Technical Trust Infrastructure (TI) and Social Trust Mechanisms (ST), following Sollner et al.’s (2016) distinction. They are placed in the Engineering Layer because both involve designed systems (technical or social) that produce trust through verifiable mechanisms.

Note on terminology: Earlier iterations of the model used the labels “Social Adaptor” and “Social Protector.” These were replaced with terminology that more directly reflects the academic literature. The underlying phenomena are extensively researched: McKnight et al.’s (2002) structural assurance, Helbing’s (2015) trusted web infrastructure, Hendrikx et al.’s (2015) reputation systems taxonomy, and Pavlou and Gefen’s (2004) marketplace trust mechanisms all address these functional domains.

Decision 5: Category 9 (Governance) becomes an L1 construct with three sub-dimensions. Governance, resilience, and assurance could theoretically be modeled as three separate constructs (each has independent grounding). The decision to consolidate them into a single construct with three sub-dimensions (Adaptive Governance, Organizational Resilience, Continuous Digital Assurance) was driven by their operational interdependence: governance without resilience produces brittle compliance; resilience without assurance lacks evidence; assurance without governance lacks authority. The three sub-dimensions align with NIST AI RMF functions and EU AI Act requirements. The construct is placed in the Governance Layer as the sole occupant. The GOV cue derivation methodology is documented in the Audit Trail, Section 5 (Category 9).

Decision 6: Category 10 (Perceived Risk) becomes an environmental moderator, not a construct. This was a critical design decision. Perceived risk emerged as one of the most strongly grounded categories (5/5). However, three theoretical arguments precluded its inclusion as a tenth L1 construct: risk is a property of the situation, not the trustor or trustee (Mayer et al., 1995); trust antecedents operate on trust and perceived risk through separate dual pathways (Kim et al., 2008); and without risk, trust is unnecessary (Rousseau et al., 1998).

Perceived risk is therefore placed at the waterline as the environmental moderator. In the iceberg metaphor, perceived risk is literally the water: it determines how much of the iceberg is visible (in low-risk situations, users scrutinize fewer cues), it refracts cues (the same cue is weighted differently depending on risk context), and it applies pressure (high risk raises the threshold that Trusting Beliefs must reach before producing Trusting Behavior). See Section 8.1 for the full theoretical specification.

Decision 7: Category 11 (Affective Trust) becomes a below-waterline construct. McAllister’s (1995) distinction between cognition-based and affect-based trust is among the most replicated findings in trust research. The decision to add Affective Trusting Beliefs (ATB) as a distinct below-waterline construct was driven by three sources: McAllister (1995) on the cognitive/affective distinction, Glikson and Woolley (2020) on differential activation in AI contexts, and Schlicker et al. (2025) on participants’ expectation of empathy from AI agents. ATB is placed alongside Cognitive Trusting Beliefs (TB) in the Institutional Layer, with four sub-dimensions: Emotional Resonance, Perceived Empathy, Interpersonal Comfort, and Affective Attachment.

Decision 8: Categories 12-13 (Trust Dynamics, Trust Repair) become the Dynamic Process Layer. Trust dynamics and trust repair are not static constructs but temporal processes. Rather than forcing them into the construct classification scheme, they were modeled as a process overlay that operates on the static architecture. The Process Layer comprises three mechanisms: Trust Formation (Lewicki & Bunker, 1995), Trust Calibration (Schlicker et al., 2025; Lee & See, 2004), and Trust Repair (Kim et al., 2004). See Section 8.2.

Decision 9: Categories 14-15 (Distrust, AI-Specific) are distributed across existing constructs. Distrust as a separate dimension (McKnight & Chervany, 2001) and AI-specific trust dimensions (Lankton et al., 2015) were not modeled as standalone constructs. Instead, their properties were distributed:

  - Distrust dynamics are captured in the Process Layer (trust repair) and in each cue's erode_description field, which specifies how trust is damaged.
  - AI-specific dimensions are distributed across relevant constructs: AI model provenance in Brand, algorithmic fairness in Reciprocity, AI disclosures and hallucination detection in Technical Trust Infrastructure, adversarial robustness and bias auditing in Governance, and system-like trust sub-dimensions in Trusting Beliefs.

This distribution strategy follows the principle that AI trust is not a separate domain but a lens through which all trust constructs operate differently (Lankton et al., 2015).

6.3 Academic Grounding Assessment

The following assessment rates each L1 construct on the strength of its academic grounding:

Construct Rating Justification
Disposition to Trust (DT) 5/5 Perfectly aligned with McKnight et al. (2002). Canonical construct since Rotter (1967).
Institution-based (IB) 5/5 Faithfully reproduces structural assurance and situational normality (McKnight et al., 2002; Zucker, 1986).
Trusting Beliefs (TB) 5/5 Core ABI model (Mayer et al., 1995). Most-cited trust model in management research.
Affective Trusting Beliefs (ATB) 5/5 McAllister (1995) among most replicated trust findings. Extended to AI by Glikson & Woolley (2020).
Trusting Intentions & Behaviors (TIB) 4/5 Well-validated (McKnight et al., 2002; Ajzen, 1991).
Brand (B) 4/5 Extensive marketing and signaling literature (Spence, 1973; Chaudhuri & Holbrook, 2001; Erdem & Swait, 1998). Grounded as direct pathway to intentions in Hoffmann, Lutz, and Meckel (2014).
Reciprocity (R) 4/5 Grounding in social-exchange theory (Blau, 1964) and the primary-cue-category status reported in Hoffmann, Lutz, and Meckel (2014). Elevation from mechanism to construct is a deliberate design choice for digital contexts.
Governance, Resilience & Assurance (GOV) 4/5 Each sub-component grounded in established frameworks (NIST, 2023; IIA, 2020; Hollnagel et al., 2006).
Technical Trust Infrastructure (TI) 4/5 Domain extensively researched (McKnight et al., 2002 structural assurance; Helbing, 2015 trusted web; Sollner et al., 2016 network of trust).
Social Trust Mechanisms (ST) 4/5 Domain extensively researched (Hendrikx et al., 2015 reputation taxonomy; Pavlou & Gefen, 2004 marketplace trust).

7. L1 Constructs: The Ten Pillars of Digital Trust

7.1 Above the Waterline (Visible Trust Cues)

These five constructs represent the observable signals that organizations can design, deploy, and optimize. They are “above the waterline” because users can perceive, evaluate, and compare them.

Code Construct Layer L2 Cues Description Primary Grounding
R Reciprocity Agency 20 Fair, transparent value exchange. Rewarding kind actions, reducing user concerns through fairness. Blau (1964); Hoffmann et al. (2014)
B Brand Agency 18 Intangible identity, reputation, consistency. Brand investment signals trustworthiness as capital-at-risk. Spence (1973); Chaudhuri & Holbrook (2001)
TI Technical Trust Infrastructure Engineering 20 Technical trust infrastructure: identity, privacy, security, compliance. Interface between cues and foundations. McKnight et al. (2002); Helbing (2015); Sollner et al. (2016)
ST Social Trust Mechanisms Engineering 17 Community-driven trust: reputation systems, endorsements, moderation, social proof. Pavlou & Gefen (2004); Hendrikx et al. (2015)
GOV Governance, Resilience & Assurance Governance 25 Organizational governance through adaptive oversight, operational resilience, and continuous assurance. NIST (2023); IIA (2020); Hollnagel et al. (2006); OECD (2024); ISO/IEC 42001

7.2 Below the Waterline (Hidden Trust Constructs)

These five constructs represent hidden psychological and institutional foundations of trust. Each is grounded in the trust literature (McKnight et al., 2002; Mayer et al., 1995; McAllister, 1995; Gefen et al., 2003; Sollner et al., 2016).

Code Construct Layer L2 Cues Description Primary Grounding
IB Institution-based Institutional 4 Trust in systems/structures even without prior interaction. Structural assurance and situational normality. McKnight et al. (2002); Zucker (1986)
TB Trusting Beliefs (Cognitive) Institutional 7 Cognitive assessments through two contextually activated lenses: the human-like lens (Competence, Benevolence, Integrity, Predictability per Mayer et al., 1995) and the system-like lens (Functionality, Reliability, Helpfulness per Lankton et al., 2015). The trustor applies whichever lens fits the trustee type. Mayer et al. (1995); McKnight et al. (2002); Lankton et al. (2015)
ATB Trusting Beliefs (Affective) Institutional 5 Emotional trust grounded in empathy, attachment, and relational interaction quality. McAllister (1995); Glikson & Woolley (2020); Schlicker et al. (2025)
DT Disposition to Trust Institutional 4 Individual propensity to trust, shaped by personality and experience. McKnight et al. (2002); Rotter (1967); Riedl (2022)
TIB Trusting Intentions & Behaviors Institutional 4 Willingness to act on trust: purchase, share data, engage. The behavioral outcome of all other layers. McKnight et al. (2002); Ajzen (1991)

8. Environmental Moderator and Dynamic Processes

8.1 The Contextual Moderation Layer: The Water

Perceived risk is not a construct in the framework but the environmental moderator that makes the entire framework meaningful. In the iceberg metaphor, perceived risk is the water. However, the water is not uniform: it has properties that vary by context, culture, domain, and user segment. Together, these properties form a Contextual Moderation Layer at the waterline.

8.1.1 Core Moderation Mechanics

  1. The water determines visibility. In low-risk situations (shallow water), most of the iceberg is visible: users do not scrutinize trust cues carefully because stakes are low. In high-risk situations (deep water), only the tip is visible: users examine every available cue because consequences of misplaced trust are severe. This is consistent with Hoffmann, Lutz, and Meckel’s (2014) report that risk-aware users pay more attention to reciprocity cues, while risk-tolerant users rely more on brand heuristics.

  2. The water refracts the cues. The same trust cue is perceived differently depending on risk context. A third-party endorsement (ST02) is a strong signal in a low-risk context (browsing a news site) but may be insufficient in a high-risk context (sharing medical data). This is the cue utilization mechanism from Schlicker et al.’s (2025) TrAM.

  3. The water applies pressure. Water pressure increases with depth. Perceived risk applies pressure on below-waterline constructs: high risk raises the threshold that Trusting Beliefs must reach before translating into Trusting Intentions and Behaviors. This maps to Mayer et al.’s (1995) risk moderation.

8.1.2 Water Properties: Four Contextual Parameters

The water around the iceberg is not uniform. It has measurable properties that modulate how cues are perceived, weighted, and processed. These parameters do not change the shape of the iceberg (the cue taxonomy remains constant) but they change how the iceberg is experienced by the trustor.

Parameter What it modulates Metaphor Key source
Risk magnitude Cue scrutiny depth. Higher risk = more cues examined, higher thresholds required. Water depth Mayer et al. (1995); Kim et al. (2008)
Cultural trust radius Which cue categories are weighted. High-trust-radius cultures (Fukuyama, 1995) weight Brand more heavily and accept institutional assurance more readily. Low-trust-radius cultures weight Governance and Technical Trust Infrastructure cues, demanding verifiable evidence over reputation. Water temperature Fukuyama (1995); Hofstede (2001); Doney, Cannon, & Mullen (1998)
Domain sensitivity Cue threshold levels. Healthcare and financial services demand higher GOV and TI cue satisfaction than entertainment or social media. The same cue set produces different trust outcomes depending on domain stakes. Water salinity Bart et al. (2005)
User segment Cue processing mode. Digital natives use heuristic processing (relying on Brand and design quality); digital immigrants use systematic processing (scrutinizing Reciprocity and TI cues). Age, technology experience, and digital literacy modulate which cues are detected and how they are weighted. Water current Hoffmann et al. (2014)

A warm current (high-trust culture) makes the waterline rise, exposing more of the iceberg: more cues are accepted at face value. A cold current (low-trust culture) pushes the waterline down, submerging more cues beneath scrutiny. The Trust Calibration process (Section 8.2, Process 2) operationalizes these contextual weights through the cue relevance and cue utilization factors from Schlicker et al.’s (2025) TrAM.

Theoretical basis: Mayer et al. (1995, p. 726): risk is a property of the situation, not the trustor or trustee. Kim et al. (2008): trust antecedents operate through dual pathways, simultaneously building trust AND reducing perceived risk through separate mechanisms. Rousseau et al. (1998, p. 395): without risk, trust is unnecessary. Fukuyama (1995): trust radius varies by culture, affecting institutional vs. interpersonal trust reliance. Hoffmann, Lutz, and Meckel (2014): user segments process trust cues through qualitatively different strategies.
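The moderation mechanics above can be sketched in code. The following is an illustrative simplification, not part of the published framework: the function name, the linear scrutiny term, and the cultural weighting rule are all hypothetical assumptions chosen only to make the four parameters concrete.

```python
def moderated_cue_weight(base_weight: float,
                         risk_magnitude: float,      # 0 (trivial) .. 1 (severe); water depth
                         trust_radius: float,        # 0 (low-trust) .. 1 (high-trust culture); water temperature
                         domain_sensitivity: float,  # 0 (entertainment) .. 1 (health/finance); water salinity
                         is_verifiable: bool) -> float:
    """Return the contextual weight of one cue for one trustor (illustrative)."""
    # Deeper water: higher risk and domain stakes increase scrutiny of every cue.
    scrutiny = 1.0 + risk_magnitude + domain_sensitivity
    # Colder water: low-trust-radius cultures discount reputational cues and
    # up-weight verifiable (Governance / Technical Trust Infrastructure) evidence.
    cultural = (2.0 - trust_radius) if is_verifiable else trust_radius
    return base_weight * scrutiny * cultural
```

Under this sketch a verifiable cue gains weight in a low-trust-radius context while a reputational cue loses it, matching the table's description of the cultural trust radius parameter.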

8.2 The Dynamic Process Layer

The Process Layer is a temporal overlay that augments the static framework without replacing it. It models three processes that operate continuously on the iceberg’s constructs:

Process 1: Trust Formation (Lewicki & Bunker, 1995, 1996). Trust formation progresses through three stages: Calculus-Based Trust (rational cost-benefit evaluation, driven by above-waterline cues), Knowledge-Based Trust (accumulated experience enabling behavioral prediction, corresponding to TB), and Identification-Based Trust (value alignment and shared identity, mapped to Brand cues B01, B12, B14). Not all relationships progress through all stages.

Process 2: Trust Calibration (Schlicker et al., 2025; Lee & See, 2004). Trust calibration is the ongoing adjustment of perceived trustworthiness in response to new evidence. It depends on four factors from TrAM: cue relevance, cue availability, cue detection, and cue utilization. Calibration dynamics include active search, cross-validation, and intuitive adjustment. De Visser et al.’s (2020) relationship equity model adds that accumulated goodwill allows systems to absorb occasional errors. The Contextual Moderation Layer (Section 8.1.2) parameterizes calibration: cultural trust radius, domain sensitivity, and user segment all influence which cues are detected and how they are weighted during calibration.
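The four TrAM factors can be read as a gating pipeline over the cue set. The sketch below is a hedged illustration of that reading, assuming caller-supplied predicate and weight functions; none of these names come from Schlicker et al. (2025).

```python
def calibrate(cues, is_relevant, is_available, is_detected, utilization):
    """Return the utilization weight of each cue that survives the three
    gating factors (relevance, availability, detection). Illustrative only."""
    return {cue: utilization(cue)
            for cue in cues
            if is_relevant(cue) and is_available(cue) and is_detected(cue)}
```

For example, a cue that is relevant but never surfaced in the interface (available = False) contributes no weight, which is the cue availability failure mode TrAM describes.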

The Trust State Vector. Trust Calibration maintains a dimensional trust state for each trustor-trustee relationship. Drawing on McKnight and Chervany’s (2001) finding that trust and distrust are independent dimensions (not opposite poles of a single continuum), and Lewicki, McAllister, and Bies’ (1998) demonstration that a trustor can simultaneously trust one dimension while distrusting another, the calibration process tracks each Mayer dimension independently:

Trust State Vector = {
  competence:  trust (+1) | neutral (0) | distrust (-1),
  integrity:   trust (+1) | neutral (0) | distrust (-1),
  benevolence: trust (+1) | neutral (0) | distrust (-1)
}

In the iceberg metaphor, the Trust State Vector tracks cracks in the ice. When trust erodes past a threshold on one dimension while remaining intact on others, the iceberg develops visible fracture lines. The iceberg does not split into two separate icebergs; it develops stress fractures along dimensional boundaries. “Watchful trust” (Lahusen et al., 2024) is the stable state {competence: +1, integrity: -1, benevolence: 0}: the user trusts the system works but distrusts its honesty, resulting in active monitoring behavior. This operationalizes distrust without requiring a separate distrust construct, consistent with the constant comparison finding (iceberg-audit-trail.md, Section 4) that distrust shares properties with trust calibration but differs in the dimension of active vigilance.

The Trust State Vector connects directly to the incident analysis pipeline: aggregating violation_ctb_breakdown across incidents per entity produces an empirical approximation of that entity’s current Trust State Vector.

Process 3: Trust Repair (Kim et al., 2004; Lewicki & Brinsfield, 2017). Different violation types require different repair strategies: competence-based violations respond to apology + corrective action; integrity-based violations respond to denial + evidence of principles. Tomlinson and Mayer (2009) extended this with causal attribution dimensions (locus, controllability, stability). Repair strategies target specific cracks in the Trust State Vector: a competence-apology aims to move the competence dimension from -1 back toward +1, while an integrity-denial aims to restore the integrity dimension.
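The violation-type to repair-strategy mapping above (Kim et al., 2004) reduces to a small lookup; the labels below are illustrative shorthand, not terms from the source.

```python
# Illustrative shorthand for the repair strategies named in the text.
REPAIR_STRATEGIES = {
    "competence": ("apology", "corrective_action"),
    "integrity": ("denial", "evidence_of_principles"),
}

def repair_strategy(violated_dimension):
    """Return the repair actions for a violated Mayer dimension, or None where
    the literature reviewed here specifies no canonical strategy."""
    return REPAIR_STRATEGIES.get(violated_dimension)
```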

flowchart LR
    Form["<b>Trust Formation</b><br/><i>Calculus based<br/>Knowledge based<br/>Identification based</i><br/>Lewicki and Bunker 1995"]
    Cal["<b>Trust Calibration</b><br/><i>maintains Trust State Vector</i><br/>Schlicker et al 2025<br/>Lee and See 2004<br/>de Visser et al 2020"]
    Rep["<b>Trust Repair</b><br/><i>Competence apology<br/>Integrity denial</i><br/>Kim et al 2004<br/>Lewicki and Brinsfield 2017"]

    Form --> Cal
    Cal --> Rep
    Rep -.->|recovery pathway| Form
    Cal -.->|ongoing recalibration| Form

    style Form fill:#d4e8ef,stroke:#94b8c8,color:#1e3a4d
    style Cal fill:#9dc5d4,stroke:#6fa8be,color:#1e3a4d
    style Rep fill:#6fa8be,stroke:#4e8ba3,color:#fff

9. L2 Cue Derivation Methodology

9.1 Above-Waterline Cues: Dual-Source Derivation

The L2 cues for above-waterline constructs were derived through two complementary processes:

Source 1: Practitioner-informed empirical input. The initial cue sets for Reciprocity (R), Brand (B), Technical Trust Infrastructure (TI), and Social Trust Mechanisms (ST) were derived from the ITI Questionnaire v8 (2025), a structured instrument developed through iterative expert consultation. The questionnaire was designed to capture the full range of trust-relevant cues that digital platform users encounter. This produced 72 initial L2 cues across the four constructs.

Source 2: Design-science synthesis of external regulatory and professional frameworks. The Governance, Resilience & Assurance (GOV) construct was developed through a design-science process (Hevner, March, Park, & Ram, 2004) that synthesizes four independent regulatory and professional frameworks: NIST AI RMF 1.0 (2023), the EU AI Act (2024), the IIA Three Lines Model (2020), and resilience engineering principles (Hollnagel, Woods, & Leveson, 2006). Prior author contributions (Glinz, 2025, 2026) articulated this synthesis; the present work operationalizes it as 25 L2 cues. The external frameworks carry the theoretical weight; the Glinz (2025, 2026) texts functioned as prior articulation documents, not as independent evidentiary sources. Independent external validation of the GOV construct is identified as a priority for future work (see Section 16 Limitations). The derivation proceeded through three phases, documented in full in Governance L2 Cues:

  1. Open coding of the four external frameworks plus the prior articulations (Glinz, 2025, 2026) produced 47 governance-related codes.
  2. Axial coding: codes were grouped into three categories (Adaptive Governance, Organizational Resilience, Continuous Digital Assurance) based on functional relationships.
  3. Internal consistency check (constant comparison): each cue was examined for distinctness from its neighbors (constant comparison, Glaser & Strauss, 1967) and for coverage of the governance concepts present in the external source frameworks. Boundary cases with decision rationales are documented in Section 11.

Cross-validation: All L2 cues were validated against the full R1-R5 literature corpus to confirm academic grounding and identify gaps. This cross-check produced additional cues in five constructs: AI-specific reciprocity cues (R19-R20), AI provenance and developer reputation cues (B04, B06), AI trust infrastructure cues (TI01-TI06), LLM governance cues (GOV20-GOV25), and affective trust cues (ATB01-ATB05). The full cross-check is documented in the R1-R5 Cue Completeness Check.

9.2 Below-Waterline Cues: Literature-Derived

The below-waterline constructs carry between four and seven L2 cues each (IB: 4, TB: 7, ATB: 5, DT: 4, TIB: 4), directly derived from the trust literature.

Each cue has six fields: cue_id, cue_name, definition, rationale, engender_description (trust-building), and erode_description (trust-damaging).
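As a minimal sketch, the six-field cue record can be represented as a dataclass. The field names mirror the documented schema; the example values for TB01 are illustrative, not drawn from the authoritative database.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cue:
    """One L2 cue, following the six-field schema documented above."""
    cue_id: str
    cue_name: str
    definition: str
    rationale: str
    engender_description: str  # trust-building behavior
    erode_description: str     # trust-damaging behavior

# Illustrative instance (values are examples, not database content)
tb01 = Cue(
    cue_id="TB01",
    cue_name="Competence (Ability)",
    definition="Belief that the trustee has the skills and expertise to fulfill its role",
    rationale="Mayer et al. (1995) ability dimension",
    engender_description="Demonstrated expertise in the task domain",
    erode_description="Visible errors in core tasks",
)
assert tb01.cue_id == "TB01"
```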


10. Complete L2 Cue Taxonomy

The framework contains 124 L2 cues across 10 constructs (100 above waterline, 24 below waterline). The tables below are generated from the authoritative Supabase database and reflect the current cue IDs, names, and construct assignments.
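The stated tallies can be checked mechanically; the per-construct counts below are taken from the tables in Sections 10.1 through 10.6.

```python
# Sanity check of the cue tallies stated above.
above = {"R": 20, "B": 18, "TI": 20, "ST": 17, "GOV": 25}
below = {"IB": 4, "TB": 7, "ATB": 5, "DT": 4, "TIB": 4}

assert sum(above.values()) == 100   # above-waterline cues
assert sum(below.values()) == 24    # below-waterline cues
assert sum(above.values()) + sum(below.values()) == 124
```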

10.1 Reciprocity (R01-R20, 20 cues)

The Reciprocity construct captures fair, transparent value exchange. Its elevation from a mechanism (Blau, 1964) to a primary construct is grounded in Hoffmann, Lutz, and Meckel (2014), who report that reciprocity cues have a strong effect on trusting beliefs relative to other cue categories they tested. In digital platform economies, the fairness of the value exchange (what users give vs. what they receive) is a highly influential category of trust signals (Ashworth & Free, 2006; Dinev & Hart, 2006).

ID Cue Theoretical Anchor
R01 Value & Fair Pricing Blau (1964) social exchange
R02 Exchange Transparency Floridi & Cowls (2019)
R03 Accountability & Liability NIST (2023)
R04 Terms, Pricing & Subscription Transparency Ashworth & Free (2006)
R05 Warranties & Guarantees Spence (1973) costly signaling
R06 Customer Service & Support Gefen (2000)
R07 Delivery & Fulfillment Excellence McKnight et al. (2002)
R08 Refund, Return, or Cancellation Policies Blau (1964)
R09 Recognition & Rewards (Loyalty Programs) Cialdini (2001) reciprocity norm
R10 Error & Breach Handling Kim et al. (2004) trust repair
R11 Dispute Resolution & Mediation Lewicki & Brinsfield (2017)
R12 User Education & Guidance Vossing et al. (2022)
R13 Acknowledgment of Contributions Blau (1964)
R14 Micropayments & In-App Purchases Ashworth & Free (2006)
R15 Algorithmic Fairness & Non-Discrimination EU AI Act (2024); NIST (2023)
R16 Proactive Issue Resolution Tomlinson et al. (2020)
R17 Informed Defaults Dinev & Hart (2006)
R18 Data Reciprocity Dinev & Hart (2006); Ripperger (2003)
R19 AI Explanation Reciprocity Lankton et al. (2015); Vossing et al. (2022)
R20 Privacy-Value Exchange Visibility Dinev & Hart (2006); Koufaris & Hampton-Sosa (2004)

10.2 Brand (B01-B18, 18 cues)

The Brand construct captures intangible identity, reputation, and consistency. In signaling theory (Spence, 1973), brand investment functions as a costly signal: organizations that invest heavily in brand reputation have more to lose from trust violations, making their trust signals more credible. Hoffmann, Lutz, and Meckel (2014) report that brand cues drive behavioral intentions through a pathway distinct from trusting-beliefs formation, supporting the treatment of Brand as a distinct construct rather than a sub-category of another trust signal group.

ID Cue Theoretical Anchor
B01 Brand Ethics & Moral Values Mayer et al. (1995) integrity
B02 Brand Investment as Costly Signal Spence (1973)
B03 Brand Image & Reputation Chaudhuri & Holbrook (2001)
B04 AI Model Provenance Lukyanenko et al. (2022); EU AI Act (2024)
B05 Recognition & Market Reach Gefen (2000) familiarity
B06 Developer Reputation Schlicker et al. (2025)
B07 Familiarity & Cultural Relevance Hofstede (2001); Fukuyama (1995)
B08 Personalized Brand Experience Bart et al. (2005)
B09 Brand Story & Narrative Botsman (2017)
B10 Design Quality & Aesthetics Schlicker et al. (2025) aesthetics as metastandard
B11 Brand Consistency & Cohesion Erdem & Swait (1998)
B12 Heritage & Longevity Gefen (2000) familiarity
B13 Cultural & Societal Impact Fukuyama (1995)
B14 Localized & Inclusive Expressions Hofstede (2001)
B15 Brand Purpose & Mission Lewicki & Bunker (1995) IBT
B16 Branded or Immersive Experiences Chaudhuri & Holbrook (2001)
B17 Values & Impact Commitments Floridi & Cowls (2019)
B18 Digital Experience Innovation Schlicker et al. (2025)

10.3 Technical Trust Infrastructure (TI01-TI20, 20 cues)

This construct captures the technical mechanisms through which trust is produced: identity management, privacy-enhancing technologies, cybersecurity, and algorithmic transparency. The construct corresponds to what Sollner et al. (2016) term technology-mediated trust and what McKnight et al. (2002) capture under structural assurance. The decision to treat these as a single construct rather than decomposing them into sub-constructs (security, privacy, transparency) was driven by their operational interdependence in platform architectures: privacy depends on security, transparency depends on identity, and all depend on robust infrastructure.

ID Cue Theoretical Anchor
TI01 Model Cards & Training Documentation EU AI Act Art. 13; Mitchell et al. (2019)
TI02 Hallucination Detection & Mitigation Schlicker et al. (2025); Huang et al. (2024)
TI03 UX Familiarity & Interface Conventions Gefen (2000); Bickmore & Cassell (2001)
TI04 Adaptive Communication & Responsiveness Zierau et al. (2021); Vossing et al. (2022)
TI05 AI System Self-Disclosure Lukyanenko et al. (2022); Schlicker et al. (2025) TrAM
TI06 Trust Maturity Indicators Hoff & Bashir (2015); Lewicki & Bunker (1995)
TI07 User Control & Agency Smith et al. (2011); Dinev & Hart (2006)
TI08 Privacy Management & Consent Mechanisms Dinev & Hart (2006)
TI09 Identity & Access Management McKnight et al. (2002) structural assurance
TI10 Trustless Systems & Smart Contracts Helbing (2015)
TI11 Privacy-Enhancing Technologies Helbing (2015)
TI12 Adaptive Cybersecurity & Fraud Detection NIST (2023)
TI13 Auditable Algorithms & Open-Source Frameworks Raji et al. (2020)
TI14 Federated Learning & Decentralized Models Thiebes et al. (2021)
TI15 Trust Score Systems & Ratings Hendrikx et al. (2015)
TI16 Data Portability & Interoperability EU AI Act (2024)
TI17 Trust Influencers (Change Management) Botsman (2017)
TI18 Generative AI Disclosures Lukyanenko et al. (2022); Thiebes et al. (2021)
TI19 Algorithmic Recourse & Appeal EU AI Act (2024)
TI20 Data Minimization & Privacy-Preserving Analytics Smith et al. (2011)

10.4 Social Trust Mechanisms (ST01-ST17, 17 cues)

This construct captures community-driven trust: reputation systems, endorsements, moderation, and social proof. It corresponds to what Sollner et al. (2016) term socially-mediated trust and what Pavlou and Gefen (2004) describe as institution-based trust mechanisms in online marketplaces. Hendrikx et al. (2015) provide the most comprehensive taxonomy of reputation systems, classifying them by information source, type, collection method, and aggregation technique.

ID Cue Theoretical Anchor
ST01 Privacy Indicators & Data Access Transparency Dinev & Hart (2006)
ST02 Data Security & Secure Storage McKnight et al. (2002) structural assurance
ST03 Affiliation & Sense of Belonging Lewicki & Bunker (1995) IBT
ST04 Reputation Systems & 3rd-Party Endorsements Pavlou & Gefen (2004); Hendrikx et al. (2015)
ST05 Brand Ambassadors & Influencer Partnerships Hoffmann et al. (2014)
ST06 Customer Testimonials & User-Generated Content Bart et al. (2005)
ST07 Community Moderation & Governance Resnick et al. (2006)
ST08 Social Translucence & “Social Mirror” Erickson & Kellogg (2000)
ST09 Events & Sponsorships Chaudhuri & Holbrook (2001)
ST10 Media Coverage & Press Mentions Holweg, Younger, & Wen (2022)
ST11 Comparative Benchmarks & Reviews Schlicker et al. (2025) cross-validation
ST12 Content Integrity & Misinformation Safeguards Resnick et al. (2006)
ST13 Flagging & Reporting Mechanisms Pavlou & Gefen (2004)
ST14 Community Voting & Collective Decision-Making Floridi & Cowls (2019)
ST15 Block/Ignore & Safe-Space Features Smith et al. (2011)
ST16 Public Interest & Crisis-Response Alerts Botsman (2017)
ST17 Co-creation & Community Engagement Blau (1964)

10.5 Governance, Resilience & Assurance (GOV01-GOV25, 25 cues)

The governance cues were derived through axial coding (Strauss & Corbin, 1998) of four external frameworks (NIST AI RMF 1.0, 2023; the EU AI Act, 2024; the IIA Three Lines Model, 2020; and resilience engineering, Hollnagel, Woods, & Leveson, 2006), together with the prior articulations in Glinz (2025, 2026), and supplemented by the R1-R5 cross-check. The external frameworks carry the theoretical weight. The full methodology is documented in Governance L2 Cues. Open coding produced 47 initial governance-related codes, which axial coding consolidated into three sub-dimensions.

Adaptive Governance (GOV01-GOV06):

ID Cue Primary Source
GOV01 Principle-Based Trust Foundations WEF (2022); Floridi & Cowls (2019); articulated in Glinz (2025)
GOV02 AI Lifecycle Risk Assessment EU AI Act (2024); FINMA 08/2024
GOV03 Governance Requirements Translation ISO/IEC 27001; NIST AI RMF (2023); articulated in Glinz (2025)
GOV04 Three Lines of Defense & Accountability IIA (2020) Three Lines Model
GOV05 Adaptive Policy & Regulatory Alignment Floridi & Cowls (2019)
GOV06 Cross-Functional Trust Ownership Luhmann (1979); IIA (2020); articulated in Glinz (2025)

Organizational Resilience (GOV07-GOV12):

ID Cue Primary Source
GOV07 Incident Response & Crisis Management Botsman (2017); Kim et al. (2004)
GOV08 Graceful Degradation & Failsafe Design Hollnagel et al. (2006)
GOV09 Anticipatory Monitoring & Early Warning Hollnagel et al. (2006)
GOV10 Operational Continuity & Recovery Hollnagel et al. (2006)
GOV11 Learning from Failures & Near Misses NIST (2023)
GOV12 Adversarial Robustness & Red-Teaming Amodei et al. (2016)

Continuous Digital Assurance (GOV13-GOV25):

ID Cue Primary Source
GOV13 Runtime Monitoring & Drift Detection NIST (2023); Raji et al. (2020)
GOV14 Verifiable Data Governance NIST (2023)
GOV15 Bias & Fairness Auditing EU AI Act (2024); NIST (2023)
GOV16 Transparency Reporting & Explainability EU AI Act (2024)
GOV17 Independent Audit & Third-Party Verification IIA (2020)
GOV18 Stakeholder Engagement & Participatory Oversight Floridi & Cowls (2019)
GOV19 Embedded Compliance & Regulatory Features EU AI Act (2024)
GOV20 LLM Truthfulness & Safety Huang et al. (2024) TrustLLM; NIST AI 600-1
GOV21 Machine Ethics Auditing Floridi & Cowls (2019); EU AI Act (2024)
GOV22 Uncertainty Communication & Expectation Management Ripperger (2003); Luhmann (1979); Schlicker et al. (2025)
GOV23 Environmental Impact Governance & Green AI OECD (2024); IEEE 7010
GOV24 AI Supply Chain & Third-Party Model Governance ISO/IEC 42001; NIST AI RMF
GOV25 Redressability & Individual Remedy Mechanisms WEF (2022); EU AI Act Art. 85-86

10.6 Below-Waterline Cues

Institution-based (IB01-IB04):

ID Cue Focus Theoretical Anchor
IB01 Structural Assurance Belief that legal, regulatory, and technological safeguards protect against risks McKnight et al. (2002)
IB02 Situational Normality Perception that the environment is typical, proper, and conducive to success McKnight et al. (2002)
IB03 Regulatory & Legal Framework Confidence in the enforceability of laws, contracts, and dispute resolution Zucker (1986); Luhmann (1979)
IB04 Intermediary & Platform Trust Trust placed in intermediaries, marketplaces, or platforms that vouch for counterparties Pavlou & Gefen (2004); Sollner et al. (2016)

Trusting Beliefs (TB01-TB07):

The TB construct uses a dual-lens architecture. TB01-TB04 constitute the human-like lens (Mayer et al., 1995), applicable when the trustee is a person, organization, or anthropomorphic AI. TB05-TB07 constitute the system-like lens (Lankton, McKnight, & Tripp, 2015), applicable when the trustee is a technology system, algorithm, or AI product. For AI systems, both lenses may operate simultaneously: the trustor assesses the deploying organization through the human-like lens (TB-H, cues TB01-TB04) and the AI product through the system-like lens (TB-S, cues TB05-TB07).

ID Cue Lens Focus Theoretical Anchor
TB01 Competence (Ability) Human-like Belief that the trustee has the skills and expertise to fulfill its role Mayer et al. (1995)
TB02 Benevolence Human-like Belief that the trustee genuinely cares about the trustor’s welfare Mayer et al. (1995)
TB03 Integrity Human-like Belief that the trustee adheres to acceptable principles and keeps commitments Mayer et al. (1995)
TB04 Predictability Human-like Belief that the trustee’s behavior is consistent and can be anticipated McKnight et al. (2002)
TB05 Functionality System-like Belief that the system provides the specific functions needed for the task Lankton, McKnight, & Tripp (2015)
TB06 Reliability System-like Belief that the system operates consistently and correctly over time Lankton, McKnight, & Tripp (2015)
TB07 Helpfulness System-like Belief that the system provides adequate and responsive help to the user Lankton, McKnight, & Tripp (2015)
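The dual-lens rule described above can be sketched as a simple selection function. The trustee category labels are illustrative assumptions for the sketch, not part of the framework specification.

```python
# Dual-lens cue selection sketch (illustrative, not normative).
HUMAN_LIKE = ["TB01", "TB02", "TB03", "TB04"]   # Mayer et al. (1995) ABI + predictability
SYSTEM_LIKE = ["TB05", "TB06", "TB07"]          # Lankton et al. (2015) FRH

def applicable_cues(trustee: str) -> list:
    if trustee in ("person", "organization", "anthropomorphic_ai"):
        return HUMAN_LIKE
    if trustee in ("system", "algorithm"):
        return SYSTEM_LIKE
    if trustee == "deployed_ai_product":
        # Both lenses operate simultaneously: TB-H for the deploying
        # organization, TB-S for the AI product itself.
        return HUMAN_LIKE + SYSTEM_LIKE
    raise ValueError(f"unknown trustee category: {trustee}")

assert len(applicable_cues("deployed_ai_product")) == 7
```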

Affective Trusting Beliefs (ATB01-ATB05):

ID Cue Focus Theoretical Anchor
ATB01 Emotional Resonance Degree to which interactions evoke emotional connection and positive affect McAllister (1995); Glikson & Woolley (2020)
ATB02 Perceived Empathy Belief that the trustee understands the trustor’s situation and responds with sensitivity Schlicker et al. (2025); Bickmore & Cassell (2001)
ATB03 Interpersonal Comfort Ease and willingness to engage in interaction, including sharing sensitive information Lewis & Weigert (1985)
ATB04 Affective Attachment Emotional bond from repeated positive interactions, creating loyalty beyond rational comparison Lewicki & Bunker (1995) identification-based trust
ATB05 Relational Interaction Design Designing interactions that build rapport through conversational strategies and sociocultural sensitivity Bickmore & Cassell (2001); Zierau (2021); Van Pinxteren et al. (2023)

Disposition to Trust (DT01-DT04):

ID Cue Focus Theoretical Anchor
DT01 Faith in Humanity General belief that others are well-meaning and reliable McKnight et al. (2002); Rotter (1967)
DT02 Trusting Stance Personal inclination to extend trust unless given reason not to McKnight et al. (2002)
DT03 Risk Propensity Individual willingness to accept vulnerability in uncertain situations Mayer et al. (1995); Sitkin & Pablo (1992)
DT04 Technology Readiness & Prior Experience Comfort with technology shaped by past interactions and familiarity Riedl (2022); Hoff & Bashir (2015)

Trusting Intentions & Behaviors (TIB01-TIB04):

ID Cue Focus Theoretical Anchor
TIB01 Willingness to Depend Readiness to rely on another party for important outcomes McKnight et al. (2002)
TIB02 Information Sharing Behavior Willingness to disclose personal or sensitive data Dinev & Hart (2006)
TIB03 Delegation & Advice Following Willingness to delegate decisions or follow recommendations Lee & See (2004)
TIB04 Transactional Commitment Willingness to make purchases, sign contracts, or engage financially Gefen et al. (2003)

11. Internal Consistency Check (Construct Boundary Validation)

Each L2 cue was examined through a constant-comparison protocol (Glaser & Strauss, 1967; Charmaz, 2006) to test construct boundaries and coverage. This is an internal consistency check, not a claim of formal ontological axiom checking in the Gruber (1993) or Guarino and Welty (2002) sense. MECE (Minto, 1987) is a consulting heuristic without standing in ontology-validation scholarship; the work done here is better described as constant-comparison boundary validation.

Construct boundary distinctness. No two cues within the same construct are intended to overlap in scope. Boundary cases were resolved through explicit scoping decisions documented in the design-decisions table:

Decision Resolution Rationale
GOV03 vs GOV05 overlap GOV03 = initial operationalization; GOV05 = ongoing adaptation Distinct temporal scopes
GOV10/GOV12 merger Original codes for business continuity and operational redundancy merged into “Operational Continuity & Recovery” Conceptual overlap; both about maintaining service under disruption
TI/ST boundary TI = technology-mediated trust; ST = socially-mediated trust Follows Sollner et al. (2016) distinction
R/GOV boundary R = user-facing fairness signals; GOV = organizational oversight processes Distinct audiences (consumer vs. organization)

Coverage of source concepts: all trust-related concepts from the source corpus map to at least one cue. Unmapped concepts were either subsumed under existing cues or triggered new cue creation (e.g., GOV18 Stakeholder Engagement was added when participatory governance was identified as a gap; consistent with Floridi & Cowls (2019) and the participatory-oversight principle in the Swiss e-ID referendum discussion articulated in Glinz (2025)).
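The coverage rule can be illustrated with a minimal sketch: every coded source concept must map to at least one cue, and unmapped concepts surface as gaps that are either subsumed or trigger new cue creation. The concept names and mappings below are invented for illustration.

```python
# Coverage-check sketch: find source concepts with no cue mapping.
concept_to_cues = {
    "structural assurance": ["IB01", "TI09"],
    "reputation aggregation": ["ST04", "TI15"],
    "participatory governance": [],  # the kind of gap that motivated GOV18
}

unmapped = [concept for concept, cues in concept_to_cues.items() if not cues]
for concept in unmapped:
    # In the documented process, each gap triggered a design decision:
    # subsume under an existing cue or create a new one.
    print(f"coverage gap: {concept!r}")
```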


12. Preliminary Evidence and Status of Empirical Validation

The present work establishes construct validity through theoretical grounding and internal consistency checking. Independent empirical validation of the full framework (predictive validation through SEM, CFA, or prospective behavioral studies) is a priority for future research (see Section 16 Limitations and Section 18 Future Work). The following studies provide preliminary evidence consistent with the framework’s above/below waterline distinction and assessment-process architecture.

12.1 Preliminary Evidence from Adjacent Trust Literature

The above/below waterline distinction (visible cues driving hidden beliefs, in turn driving intentions) is consistent with the dual-pathway findings of Kim, Ferrin, and Rao (2008), the TAM-trust integration of Gefen, Karahanna, and Straub (2003), and the meta-analytic synthesis in Beldad, de Jong, and Steehouder (2010). For human-AI and human-automation trust specifically, Kaplan, Kessler, Brill, and Hancock (2023) provide a meta-analysis in Human Factors consistent with the multi-source cue architecture used here. These are not direct tests of the Iceberg Trust Model; they are convergent findings in the adjacent literature.

12.2 Schlicker et al. (2025): TrAM and Virtual Doctor Study (qualitative, N=65)

The Trustworthiness Assessment Model (TrAM) and the accompanying qualitative study are consistent with the above/below waterline distinction and elaborate the assessment processes the framework captures:

12.3 Hoffmann, Lutz, and Meckel (2014): Grounding for Decisions 2 and 3

Hoffmann, Lutz, and Meckel (2014, Journal of Management Information Systems) report an SEM study of online trust among German Internet users. That study is cited as grounding for the construct-level decisions to elevate Brand and Reciprocity to distinct L1 constructs (Section 6.2, Decisions 2 and 3); it is not claimed here as independent empirical validation of the present framework, because using grounding sources as validation sources is a discovery-confirmation circularity (Meehl, 1978). The specific path coefficients and sample size in that study should be verified against the primary source for any downstream publication.


13. Database Schema

The framework is stored in Supabase PostgreSQL across 17 tables. The legacy table-namespace prefix is digital_trust_ontology_*; this is a code-level artifact that predates the present terminology reframe and is not a theoretical claim about formal ontology status.

13.1 Entity Tables

erDiagram
    LAYERS ||--o{ CONSTRUCTS : contains
    CONSTRUCTS ||--o{ CUES : "has L2 cues"
    CONSTRUCTS }o--o{ CONSTRUCT_CUE_MAP : maps
    CUES }o--o{ CONSTRUCT_CUE_MAP : maps

    LAYERS {
        text layer_id PK
        text layer_name
        int layer_order
        text description
        text position
        text color_hex
    }

    CONSTRUCTS {
        text construct_id PK
        text construct_name
        boolean above_waterline
        boolean has_l2_cues
        text description
        text layer_id FK
        int sort_order
    }

    CUES {
        text cue_id PK
        text cue_name
        text construct_id FK
        text definition
        text rationale
        text engender_description
        text erode_description
        int sort_order
    }

Table Rows Description
digital_trust_ontology_layers 4 Agency, Engineering, Governance, Institutional
digital_trust_ontology_constructs 10 5 above waterline, 5 below waterline
digital_trust_ontology_cues 124 20 R + 18 B + 20 TI + 17 ST + 25 GOV (above waterline) + 4 IB + 7 TB + 5 ATB + 4 DT + 4 TIB (below waterline). TB carries 7 cues: 4 human-like (Mayer ABI+P) + 3 system-like (Lankton FRH). All IDs consecutive within each construct.
digital_trust_ontology_violation_types 7 Competence-Based, Integrity-Based, Benevolence-Based, etc.
digital_trust_ontology_response_strategies 9 Denial, Apology, Reparations, etc.
digital_trust_ontology_industries 14 Enumerated industry sectors
digital_trust_ontology_ai_failure_types 7 Bias, Privacy, Safety, Explainability, etc.
digital_trust_ontology_severity_levels 5 1 (Minor) through 5 (Catastrophic)

13.2 Mapping Tables (9)

Mapping tables link trust incidents to framework entities using a standard schema: id UUID PK, source_id, target_id, notes, confidence, created_at, UNIQUE(source_id, target_id).

Table Relationship
_construct_cue_map Static taxonomy: construct to its L2 cues
_incident_construct_map Incident to affected L1 constructs
_incident_cue_map Incident to affected L2 cues
_incident_mitigation_construct_map Incident to mitigation L1 constructs
_incident_mitigation_cue_map Incident to mitigation L2 cues
_incident_violation_map Incident to violation type
_incident_response_map Incident to response strategy
_incident_industry_map Incident to industry
_incident_failure_type_map Incident to AI failure type
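The standard mapping-table shape described above can be sketched as follows, using an in-memory SQLite stand-in for the Supabase PostgreSQL tables (types are approximated, since SQLite has no native UUID; the table name is one of the nine documented mapping tables).

```python
import sqlite3
import uuid

# In-memory stand-in for one mapping table with the documented schema:
# id UUID PK, source_id, target_id, notes, confidence, created_at,
# UNIQUE(source_id, target_id).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE incident_cue_map (
        id TEXT PRIMARY KEY,
        source_id TEXT NOT NULL,   -- incident id
        target_id TEXT NOT NULL,   -- cue id, e.g. 'GOV07'
        notes TEXT,
        confidence REAL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP,
        UNIQUE (source_id, target_id)
    )
""")
conn.execute(
    "INSERT INTO incident_cue_map (id, source_id, target_id) VALUES (?, ?, ?)",
    (str(uuid.uuid4()), "incident-001", "GOV07"),
)
try:
    # Second mapping of the same incident to the same cue is rejected.
    conn.execute(
        "INSERT INTO incident_cue_map (id, source_id, target_id) VALUES (?, ?, ?)",
        (str(uuid.uuid4()), "incident-001", "GOV07"),
    )
except sqlite3.IntegrityError:
    pass  # UNIQUE(source_id, target_id) enforced, as the schema intends
```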

13.3 RPCs

Two PostgreSQL functions (SECURITY DEFINER) provide efficient data access:


14. Knowledge Graph Visualization

The platform provides three visualization modes accessible via the Iceberg View tab:

14.1 Beeswarm Mode

Animated dots representing L2 cues float within their parent construct’s polygon region on the iceberg SVG. Dots bounce off the actual construct boundaries (polygon edge detection via ray casting). Hover reveals cue details; click opens a detail sheet.
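The boundary test can be sketched with the standard even-odd ray-casting rule: cast a horizontal ray to the right and count edge crossings. This is an illustrative implementation of the named technique, not the platform's actual code.

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray-casting point-in-polygon test.

    poly is a list of (x, y) vertices in order; an odd number of edge
    crossings to the right of the point means the point is inside.
    """
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
assert point_in_polygon(2, 2, square)
assert not point_in_polygon(5, 2, square)
```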

14.2 Radial Burst Mode

Cues radiate outward from the centroid of each construct region along spoke lines. Each spoke terminates at a cue node labeled with its ID. Endpoints are clamped to stay within the construct polygon.

14.3 Treemap Mode

Each construct region is subdivided into a grid of cue cells filling the bounding box. Cell size varies per construct (based on the number of cues and the available area) so that every cue is always visible. Hover shows the cue name and definition; click opens the detail sheet.
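One plausible sizing rule for such a grid (an assumption for illustration, not the platform's actual implementation) chooses a rows-by-columns layout whose aspect ratio roughly matches the construct's bounding box while guaranteeing a cell for every cue.

```python
import math

def grid_for(n, width, height):
    """Return (rows, cols) fitting n cue cells into a width x height box,
    with cell proportions roughly matching the box's aspect ratio."""
    cols = max(1, round(math.sqrt(n * width / height)))
    rows = math.ceil(n / cols)
    return rows, cols

# e.g. GOV's 25 cues in a wide, shallow construct region
rows, cols = grid_for(25, 300, 120)
assert rows * cols >= 25  # every cue gets a visible cell
```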

14.4 Force-Directed Graph

The Overview & Graph tab renders a force-directed SVG graph with custom physics simulation. Node types: constructs (above/below waterline), L2 cues (clustered around parents), violation types, response strategies, industries, AI failure types, severity levels. Edges connect constructs to their cues.
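A minimal sketch of such a physics loop follows: spring attraction along edges plus pairwise repulsion between nodes. The constants and node data are illustrative assumptions, not the platform's actual simulation.

```python
import math

# Toy graph: one construct with two cue children.
nodes = {"GOV": [0.0, 0.0], "GOV07": [1.0, 0.0], "GOV08": [0.0, 1.0]}
edges = [("GOV", "GOV07"), ("GOV", "GOV08")]

def step(k_spring=0.05, k_repel=0.01, rest=1.0):
    forces = {n: [0.0, 0.0] for n in nodes}
    # Springs pull connected nodes toward the rest length.
    for a, b in edges:
        ax, ay = nodes[a]; bx, by = nodes[b]
        dx, dy = bx - ax, by - ay
        d = math.hypot(dx, dy) or 1e-9
        f = k_spring * (d - rest)
        forces[a][0] += f * dx / d; forces[a][1] += f * dy / d
        forces[b][0] -= f * dx / d; forces[b][1] -= f * dy / d
    # All node pairs repel (inverse-square falloff).
    names = list(nodes)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            ax, ay = nodes[a]; bx, by = nodes[b]
            dx, dy = bx - ax, by - ay
            d2 = (dx * dx + dy * dy) or 1e-9
            f = k_repel / d2
            forces[a][0] -= f * dx; forces[a][1] -= f * dy
            forces[b][0] += f * dx; forces[b][1] += f * dy
    for n, (fx, fy) in forces.items():
        nodes[n][0] += fx; nodes[n][1] += fy

for _ in range(100):
    step()
```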

Features:


15. Table Editors

Each entity table is editable through the UI:

All tables support: search, add/edit/delete, CSV export.


16. Export Capabilities


17. Data Sources

The framework seed data is derived from:


18. Future Work and Access Control

18.1 Future Work

18.2 Access Control

The knowledge graph is accessible only in the digital_trust_audit catalog. Navigation entry: “Trust Ontology” (Network icon) in the sidebar, visible only when the user’s active catalog is digital_trust_audit. The table-namespace digital_trust_ontology_* is the code-level artifact and is not intended as a theoretical claim about formal ontology status.

Route: /digital-trust-knowledge

RLS policies: anon and authenticated roles have full read/write access on all 17 database tables.


16. Limitations

This section consolidates the known limitations of the framework development methodology.

Single-coder analysis. Coding was performed by a single researcher. The documented constant-comparison protocol, paradigm mapping, and construct-boundary validation (Section 11) provide partial mitigation, but formal inter-rater reliability cannot be reported. Future work will include expert panel validation (Delphi method) to establish content validity, and independent dual coding of a random source subset (see Section 18).

Role of author’s prior work. The R1-R5 framework that organized the literature selection is articulated in the author’s prior work (Glinz, 2015, 2025, 2026). These prior texts are cited as articulation documents, not as independent evidentiary sources; the theoretical weight is carried by external, independently validated sources (Mayer et al., 1995; McKnight et al., 2002; Lankton et al., 2015; Sollner et al., 2016; NIST, 2023; EU AI Act, 2024; Hollnagel et al., 2006; and others). The R1-R5 framework is a selection scaffold, not a load-bearing theoretical contribution of the present work.

Governance construct derivation. The Governance, Resilience & Assurance (GOV) construct was developed through a design-science process (Hevner, March, Park, & Ram, 2004) synthesizing four independent regulatory and professional frameworks: NIST AI RMF 1.0 (2023), the EU AI Act (2024), the IIA Three Lines Model (2020), and resilience engineering principles (Hollnagel, Woods, & Leveson, 2006). Prior author contributions (Glinz, 2025, 2026) articulated the synthesis; the present work operationalizes it as 25 L2 cues. Independent external validation of the GOV construct is a priority for future work (see Section 18).

Conceptual coverage, not theoretical saturation. Section 4.2 and the audit trail (Section 9) report a conceptual coverage assessment: the cumulative count of distinct axial categories ceased to grow as additional sources were coded. This is not theoretical saturation in the Glaserian sense, which would require iterative theoretical sampling driven by ongoing analysis. The coverage assessment was reconstructed from the traceability matrix rather than tracked prospectively during coding.

ITI Questionnaire v8 is an instrument under development. The ITI Questionnaire v8 (2025) is an internal instrument under development by the author; it has not been administered to an external sample. In the present work it functioned as a structured prompt for cue derivation only. Claims of practitioner-academic triangulation are deferred until the instrument has been externally validated (pilot, EFA, CFA), which is the subject of a forthcoming paper.

No predictive validation. The framework establishes construct validity through theoretical grounding and internal consistency checking. Predictive validity (whether the framework predicts real-world trust outcomes such as user behavior, market impact, or regulatory action) has not been tested and requires separate empirical investigation (see Section 18).

Construct, not formal ontology. The work is a multi-level conceptual framework and classification scheme, not a formal ontology in the Gruber (1993) / Guarino and Welty (2002) sense. Formal axiomatization, OntoClean metaproperty analysis, and OWL/RDFS representation are identified as future work. The database namespace digital_trust_ontology_* is a legacy code artifact and should not be read as a claim of formal ontology status.

LLM-Assistance Disclosure

Draft composition and copy-editing used Claude (Anthropic) and ChatGPT (OpenAI). All factual claims, source attributions, and analytical decisions were verified against primary sources by the author. Coding decisions (category formation, boundary resolution, construct consolidation) were made by the author, not the LLM. The author takes full responsibility for the final text.


References

Acquisti, A., Brandimarte, L., & Loewenstein, G. (2015). Privacy and human behavior in the age of information. Science, 347(6221), 509-514.

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179-211.

Al-Eisawi, D. (2022). A design framework for novice researchers using grounded theory methodology and coding in qualitative research. International Journal of Qualitative Methods, 21, 1-16.

Amodei, D., et al. (2016). Concrete problems in AI safety. arXiv:1606.06565.

Ashworth, L., & Free, C. (2006). Marketing dataveillance and digital privacy. Journal of Business Ethics, 67(2), 107-123.

Barnett-Page, E., & Thomas, J. (2009). Methods for the synthesis of qualitative research: A critical review. BMC Medical Research Methodology, 9, 59.

Bart, Y., Shankar, V., Sultan, F., & Urban, G. L. (2005). Are the drivers and role of online trust the same for all web sites and consumers? Journal of Marketing, 69(4), 133-152.

Beldad, A., de Jong, M., & Steehouder, M. (2010). How shall I trust the faceless and the intangible? A literature review on the antecedents of online trust. Computers in Human Behavior, 26(5), 857-869.

Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10(1), 122-142.

Bickmore, T. W., & Cassell, J. (2001). Relational agents: A model and implementation of building user trust. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’01), 396-403.

Blau, P. M. (1964). Exchange and power in social life. Wiley.

Booth, A. (2006). “Brimful of STARLITE”: Toward standards for reporting literature searches. Journal of the Medical Library Association, 94(4), 421-429.

Botsman, R. (2017). Who can you trust? PublicAffairs.

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Sage.

Chaudhuri, A., & Holbrook, M. B. (2001). The chain of effects from brand trust and brand affect to brand performance. Journal of Marketing, 65(2), 81-93.

Choung, H., David, P., & Ross, A. (2022). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human-Computer Interaction.

Cialdini, R. B. (2001). Influence: Science and practice (4th ed.). Allyn & Bacon.

Corbin, J., & Strauss, A. (2015). Basics of qualitative research: Techniques and procedures for developing grounded theory (4th ed.). Sage.

Denzin, N. K. (1978). The research act: A theoretical introduction to sociological methods (2nd ed.). McGraw-Hill.

Deutsch, M. (1976). The resolution of conflict. Yale University Press.

de Visser, E. J., Peeters, M. M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., & Neerincx, M. A. (2020). Towards a theory of longitudinal trust calibration in human-robot teams. International Journal of Social Robotics, 12(2), 459-478.

Dinev, T., & Hart, P. (2006). An extended privacy calculus model for e-commerce transactions. Information Systems Research, 17(1), 61-80.

Doney, P. M., Cannon, J. P., & Mullen, M. R. (1998). Understanding the influence of national culture on the development of trust. Academy of Management Review, 23(3), 601-620.

Erdem, T., & Swait, J. (1998). Brand equity as a signaling phenomenon. Journal of Consumer Psychology, 7(2), 131-157.

Erickson, T., & Kellogg, W. A. (2000). Social translucence: An approach to designing systems that support social processes. ACM Transactions on Computer-Human Interaction, 7(1), 59-83.

Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1).

Fukuyama, F. (1995). Trust: The social virtues and the creation of prosperity. Free Press.

Gefen, D. (2000). E-commerce: The role of familiarity and trust. Omega, 28(6), 725-737.

Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51-90.

Gefen, D., & Pavlou, P. A. (2012). The boundaries of trust and risk. Information Systems Research, 23(3-Part-2), 940-959.

Giffin, K. (1967). The contribution of studies of source credibility to a theory of interpersonal trust in the communication process. Psychological Bulletin, 68(2), 104-120.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Aldine.

Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence. Academy of Management Annals, 14(2), 627-660.

Glinz, D. (2015). Center for Digital Business Yearbook 2015. buch & netz.

Glinz, D. (2025). The Architecture of Digital Trust: A Multi-Level Framework for Closing the AI Value Gap. Chapter 9 Preprint.

Glinz, D. (2026). The Architecture of Digital Trust: A Multi-Level Framework for Bridging the AI Value Gap. SDS2026.

Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P., Kyriakidou, O., & Peacock, R. (2005). Storylines of research in diffusion of innovation: A meta-narrative approach to systematic review. Social Science & Medicine, 61(2), 417-430.

Gruber, T. R. (1993). A translation approach to portable ontology specifications. Knowledge Acquisition, 5(2), 199-220.

Guarino, N., & Welty, C. A. (2002). Evaluating ontological decisions with OntoClean. Communications of the ACM, 45(2), 61-65.

Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18(1), 59-82.

Helbing, D. (2015). Big data, privacy, and trusted web: What needs to be done. In D. Helbing, Thinking ahead: Essays on big data, digital revolution, and participatory market society (pp. 141-156). Springer.

Hendrikx, F., Bubendorfer, K., & Chard, R. (2015). Reputation systems: A survey and taxonomy. Journal of Parallel and Distributed Computing, 75, 184-197.

Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75-105.

Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407-434.

Hoffmann, C. P., Lutz, C., & Meckel, M. (2014). Digital natives or digital immigrants? The impact of user characteristics on online trust. Journal of Management Information Systems, 31(3), 138-171.

Hofstede, G. (2001). Culture’s consequences (2nd ed.). Sage.

Hollnagel, E., Woods, D. D., & Leveson, N. (Eds.). (2006). Resilience engineering. Ashgate.

Holweg, M., Younger, R., & Wen, S. (2022). The reputational risks of AI. California Management Review, 65(1), 5-30.

Huang, Y., et al. (2024). TrustLLM: Trustworthiness in large language models. Proceedings of Machine Learning Research.

Institute of Internal Auditors. (2020). The IIA’s Three Lines Model.

ISACA. (2024). Digital Trust Ecosystem Framework (DTEF).

Jick, T. D. (1979). Mixing qualitative and quantitative methods: Triangulation in action. Administrative Science Quarterly, 24(4), 602-611.

Kaplan, A. D., Kessler, T. T., Brill, J. C., & Hancock, P. A. (2023). Trust in artificial intelligence: Meta-analytic findings. Human Factors, 65(2), 337-359.

Kim, D. J., Ferrin, D. L., & Rao, H. R. (2008). A trust-based consumer decision-making model in electronic commerce. Decision Support Systems, 44(2), 544-564.

Kim, P. H., Ferrin, D. L., Cooper, C. D., & Dirks, K. T. (2004). Removing the shadow of suspicion. Journal of Applied Psychology, 89(1), 104-118.

Koufaris, M., & Hampton-Sosa, W. (2004). The development of initial trust in an online company by new customers. Information & Management, 41(3), 377-397.

Lahusen, C., et al. (2024). Trust, trustworthiness and AI governance. Scientific Reports, 14, Article 19368.

Lankton, N. K., McKnight, D. H., & Tripp, J. (2015). Technology, humanness, and trust. Journal of the Association for Information Systems, 16(10), 880-918.

Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80.

Lewicki, R. J., & Brinsfield, C. T. (2017). Trust repair. Annual Review of Organizational Psychology, 4, 287-313.

Lewicki, R. J., & Bunker, B. B. (1995). Trust in relationships: A model of development and decline. In B. B. Bunker & J. Z. Rubin (Eds.), Conflict, cooperation, and justice (pp. 133-173). Jossey-Bass.

Lewicki, R. J., McAllister, D. J., & Bies, R. J. (1998). Trust and distrust. Academy of Management Review, 23(3), 438-458.

Lewis, J. D., & Weigert, A. (1985). Trust as a social reality. Social Forces, 63(4), 967-985.

Luhmann, N. (1979). Trust and power. Wiley.

Lukyanenko, R., Maass, W., & Storey, V. C. (2022). Trust in AI: From a foundational trust framework to emerging research opportunities. Electronic Markets, 32(4), 1993-2020.

Makridis, C. A., et al. (2024). From theory to practice: Harmonizing taxonomies of trustworthy AI. Health Policy Open, 7, 100128.

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734.

McAllister, D. J. (1995). Affect- and cognition-based trust. Academy of Management Journal, 38(1), 24-59.

McKnight, D. H., & Chervany, N. L. (2001). Trust and distrust definitions. In R. Falcone et al. (Eds.), Trust in cyber-societies (pp. 27-54). Springer.

McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust measures for e-commerce. Information Systems Research, 13(3), 334-359.

McKnight, D. H., Cummings, L. L., & Chervany, N. L. (1998). Initial trust formation in new organizational relationships. Academy of Management Review, 23(3), 473-490.

Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46(4), 806-834.

Minto, B. (1987). The pyramid principle: Logic in writing and thinking. Pitman.

Muir, B. M. (1994). Trust in automation: Part I. Ergonomics, 37(11), 1905-1922.

National Institute of Standards and Technology. (2023). AI Risk Management Framework (AI RMF 1.0).

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., … & Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71.

Paré, G., Trudel, M.-C., Jaana, M., & Kitsiou, S. (2015). Synthesizing information systems knowledge: A typology of literature reviews. Information & Management, 52(2), 183-199.

Pavlou, P. A., & Gefen, D. (2004). Building effective online marketplaces with institution-based trust. Information Systems Research, 15(1), 37-59.

Qureshi, H. A., & Ünlü, Z. (2020). Beyond the paradigm conflicts: A four-step coding instrument for grounded theory. International Journal of Qualitative Methods, 19, 1-10.

Raji, I. D., et al. (2020). Closing the AI accountability gap. FAccT ’20, 33-44.

Reeder, G. D., & Brewer, M. B. (1979). A schematic model of dispositional attribution in interpersonal perception. Psychological Review, 86(1), 61-79.

Resnick, P., Zeckhauser, R., Swanson, J., & Lockwood, K. (2006). The value of reputation on eBay. Experimental Economics, 9(2), 79-101.

Riedl, R. (2022). Is trust in artificial intelligence systems related to user personality? Electronic Markets, 32(4), 2021-2051.

Ripperger, T. (2003). Ökonomik des Vertrauens: Analyse eines Organisationsprinzips [The economics of trust: Analysis of an organizational principle] (2nd ed.). Mohr Siebeck.

Rotter, J. B. (1967). A new scale for the measurement of interpersonal trust. Journal of Personality, 35(4), 651-665.

Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. (1998). Not so different after all. Academy of Management Review, 23(3), 393-404.

Saldaña, J. (2021). The coding manual for qualitative researchers (4th ed.). Sage.

Schlicker, N., Baum, K., Uhde, A., Sterz, S., Hirsch, M. C., & Langer, M. (2025). How do we assess the trustworthiness of AI? Computers in Human Behavior. https://doi.org/10.1016/j.chb.2025.108671

Schlicker, N., Lechner, F., Wehrle, K., Greulich, B., Hirsch, M. C., & Langer, M. (2025). Trustworthy enough? Technology, Mind, and Behavior. https://doi.org/10.1037/tmb0000164

Schuetz, S., Kuai, L., Lacity, M. C., & Steelman, Z. (2025). A qualitative systematic review of trust in technology. Journal of Information Technology, 40(1), 4.

Smith, H. J., Dinev, T., & Xu, H. (2011). Information privacy research. MIS Quarterly, 35(4), 989-1015.

Söllner, M., Hoffmann, A., & Leimeister, J. M. (2016). Why different trust relationships matter for information systems users. European Journal of Information Systems, 25(3), 274-287.

Spence, M. (1973). Job market signaling. Quarterly Journal of Economics, 87(3), 355-374.

Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Sage.

Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory (2nd ed.). Sage.

Szalma, J. L., & Taylor, G. S. (2011). Individual differences in response to automation: The five factor model of personality. Journal of Experimental Psychology: Applied, 17(2), 71-96.

Thiebes, S., Lins, S., & Sunyaev, A. (2021). Trustworthy artificial intelligence. Electronic Markets, 31(2), 447-464.

Tomlinson, E. C., & Mayer, R. C. (2009). The role of causal attribution dimensions in trust repair. Academy of Management Review, 34(1), 85-104.

Tomlinson, E. C., Nelson, J. S., & Langlinais, L. A. (2020). A cognitive process model of trust repair. International Journal of Conflict Management, 31(5), 781-800.

Van Pinxteren, M. M. E., Wetzels, R. W. H., Rüger, J., Pluymaekers, M., & Wetzels, M. (2019). Trust in humanoid robots. Journal of Services Marketing, 33(4), 507-518.

Vössing, M., Kühl, N., Lind, M., & Satzger, G. (2022). Designing transparency for effective human-AI collaboration. Information Systems Frontiers, 24(3), 877-895.

Wolfswinkel, J. F., Furtmueller, E., & Wilderom, C. P. M. (2013). Using grounded theory as a method for rigorously reviewing literature. European Journal of Information Systems, 22(1), 45-55.

World Economic Forum. (2022). Earning digital trust: Decision-making for trustworthy technologies.

Zierau, N., Engel, C., Satzger, G., & Schwabe, G. (2021). On the design of and interaction with conversational agents. Journal of the Association for Information Systems, 22(3), 791-823.

Zucker, L. G. (1986). Production of trust: Institutional sources of economic structure, 1840-1920. Research in Organizational Behavior, 8, 53-111.