Our view on trust


Trust plays important roles in diverse decentralized environments, including our society at large. Computational trust models are a projection of the human notion of trust into the digital world. They have various applications, such as guiding users' judgements about other users on online auction sites, or determining the quality of contributions on Web 2.0 sites.

We view trust as a measure or assessment that someone (we will refer to them as an 'agent') makes regarding the outcome of an interaction with another agent. More concretely, quoting Gambetta: "Trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor or enforce it) and in a context in which it affects his own action." While the computation is thus inherently quantitative in nature, the interpretation can be qualitative, for example based on a threshold reflecting an agent's appetite for risk, or translated into a recommendation or ranking mechanism to choose a (subset of) interaction partner(s) from a pool of potential interactions.
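As a toy illustration of this qualitative interpretation (the trust values and threshold below are invented, not taken from any particular model), a quantitative trust score can be turned into a decision via a risk-appetite threshold, or into a ranking to pick partners:

```python
# Hypothetical sketch: interpreting quantitative trust scores in [0, 1]
# qualitatively, either by thresholding or by ranking candidates.

def qualitative_decision(trust, risk_appetite=0.5):
    """Interpret a trust score as accept/reject given a risk threshold."""
    return "accept" if trust >= risk_appetite else "reject"

def rank_partners(candidates, k=2):
    """Rank candidate partners by trust score and keep the top-k subset."""
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

candidates = {"alice": 0.9, "bob": 0.4, "carol": 0.7}
print(qualitative_decision(0.8, risk_appetite=0.6))  # a cautious agent
print(rank_partners(candidates))
```

A more risk-averse agent simply raises `risk_appetite`; the same underlying score then yields a different qualitative judgement.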

Think local, act global


Most existing trust models require historical information about the past behavior of the specific agent being evaluated - information that is not always available in large-scale, decentralized environments. Such information is generally derived either from direct experiences, or indirectly, for example using a web of trust, or by aggregation at a global scale, often interpreted as the agent's 'reputation'.

We explore the possibility of determining the trustworthiness of an agent based on various meta-information (besides historical information) that may be available. The intent is not so much to compete with and outperform existing mechanisms that leverage purely historical information, but rather to complement them. In many circumstances, such global/historical information may simply not be available. Even when historical information is available, other meta-information may nonetheless provide additional useful clues. Likewise, an assessing agent's own past interactions with other agents provide it opportunities to 'learn' how to discern and interpret such meta-information. Such subjective interpretation by individual assessors provides the scope for personalization, while the context in which the assessment is made, and the corresponding interpretation of the associated meta-information, provides the scope for context-dependent assessment. Our work is based on these basic intuitions, facilitating personalized, context-dependent trust models that may be applied even in the absence of the information generally needed by most other trust models, by leveraging instead locally available information.

It is worth reiterating at this juncture that the models we propose may or may not work under different circumstances, depending on what meta-information can or cannot be obtained, and whether it indeed contains any useful information for the context in which the trust assessment is being carried out. Furthermore, if historical information is also available, models using it may outperform ours; conversely, augmenting our models with such additional information can improve their performance. To summarize, the maxim 'one size does not fit all' is as true for our trust models as for any other existing models.

Basic concepts and trust models

StereoTrust: Using stereotypes for trust


In real life interactions, in order to make a first guess about the trustworthiness of a stranger, we commonly use our 'instinct' - essentially stereotypes developed from our past interactions with 'similar' people. We propose StereoTrust, a computational trust model inspired by real life stereotypes. An agent forms stereotypes using its previous transactions with other agents. A stereotype associates certain features of agents with an expected outcome of a transaction. These features can be taken from agents' profile information, or from agents' observed behavior in the system. When facing a stranger, the stereotypes matching the stranger's profile are aggregated to derive its expected trust. Additionally, when some information about the stranger's previous transactions is available, StereoTrust uses it to refine the stereotype matching. Based on experiments on both real world and synthetic data sets, StereoTrust compares favorably with existing trust models that use different kinds of, and more complete, historical information. Moreover, because evaluation in StereoTrust is done according to users' personal stereotypes, the system is completely distributed and the results obtained are personalized. StereoTrust can be used as a complementary mechanism, for example to provide the initial trust value for a stranger (bootstrapping), especially when there are no trusted, common third parties.
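A minimal sketch of the stereotype idea, assuming feature-based grouping with simple success/total counts and evidence-weighted aggregation (an illustration of the concept, not the exact StereoTrust formulation; feature names are invented):

```python
from collections import defaultdict

class Stereotypes:
    """Toy stereotype store: per feature, count successful vs. total deals."""

    def __init__(self):
        # per feature: [successful transactions, total transactions]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, features, success):
        """Update stereotypes from one of the agent's past transactions."""
        for f in features:
            self.stats[f][0] += int(success)
            self.stats[f][1] += 1

    def expected_trust(self, stranger_features):
        """Aggregate matching stereotypes, weighted by amount of evidence."""
        matches = [self.stats[f] for f in stranger_features if f in self.stats]
        if not matches:
            return 0.5  # no matching stereotype: fall back to a neutral prior
        total = sum(n for _, n in matches)
        return sum(s for s, _ in matches) / total

st = Stereotypes()
st.record({"new_seller", "electronics"}, success=False)
st.record({"veteran", "electronics"}, success=True)
st.record({"veteran", "books"}, success=True)
print(st.expected_trust({"veteran", "electronics"}))
```

Because each agent builds its own counts from its own transactions, the resulting estimate is inherently local and personalized, mirroring the distributed nature of the model.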

MetaTrust: A machine learning framework


Following up on our work on StereoTrust, we propose MetaTrust, a generic, machine learning based trust framework in which an agent uses its own previous transactions (with other agents) to build a knowledge base, and utilizes it to assess the trustworthiness of a transaction based on associated features that are capable of distinguishing successful transactions from unsuccessful ones. These features are harnessed using appropriate machine learning algorithms to extract relationships between the potential transaction and previous transactions. Trace driven experiments using a real auction dataset show that this approach provides good accuracy and is highly efficient compared to other trust mechanisms, especially when historical information about the specific agent is rare, incomplete or inaccurate. The fundamental difference between MetaTrust and StereoTrust is MetaTrust's ability to automatically discern which meta-information is relevant, and to what extent, in determining trust for any given context.
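As an illustration of the general idea (a stand-in for, not a reproduction of, MetaTrust's actual learning algorithms), a simple nearest-centroid discriminant over invented transaction features can separate successful from unsuccessful transactions learned from the assessor's own history:

```python
# Toy discriminant: classify a candidate transaction by whether its feature
# vector lies closer to the centroid of past successful transactions or to
# the centroid of past failed ones. Feature names below are made up.

def centroid(rows):
    """Mean feature vector of a set of transactions."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def predict_success(history, outcomes, candidate):
    """True if the candidate is closer to the 'successful' centroid."""
    good = centroid([f for f, ok in zip(history, outcomes) if ok])
    bad = centroid([f for f, ok in zip(history, outcomes) if not ok])
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return dist(candidate, good) <= dist(candidate, bad)

# features: (price_ratio, seller_age_years, n_photos) -- hypothetical
history = [(1.0, 5, 4), (0.9, 3, 5), (2.5, 0, 1), (3.0, 1, 0)]
outcomes = [True, True, False, False]
print(predict_success(history, outcomes, (1.1, 4, 3)))
```

Note that the classifier never needs any history about the specific counterpart: it only needs the assessor's own labeled transactions, which is precisely what makes this style of model applicable to strangers.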


We have applied our trust models in different application domains, and sometimes in different manners (in some cases, we have used the models to design recommendation systems), and continue to explore further opportunities. Below we summarize some of these efforts.

Online Auctions


Encountering unknown sellers is very common on online auction sites. In such a scenario, a buyer cannot estimate the trustworthiness of an unknown seller based on the seller's past behavior, and is thus exposed to the risk of being cheated. In this work we describe a stereotypes based mechanism to determine the risk of a potential transaction even if the seller is personally unknown, not only to the buyer but also to the rest of the system. Specifically, our approach first identifies discriminating attributes which are capable of distinguishing successful transactions from unsuccessful ones. A buyer can use its own past transactions (with other sellers) to form such stereotypes. Alternatively, the community's collective knowledge can also be used to build them. When faced with a potential transaction with an unknown seller, a buyer can estimate trustworthiness (and thus the risk) by combining the corresponding stereotypes. We report experiments over real auction data collected from Allegro, a leading auction site in Eastern Europe. We leverage such analytics to provide a browser (Firefox) based tool to guide buyers during live auctions.
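The notion of a discriminating attribute can be sketched as follows, scoring an attribute by how far its presence shifts the observed success rate (the attribute names and data here are invented for illustration, not drawn from the Allegro traces):

```python
# Toy scoring of auction attributes: an attribute discriminates well when
# transactions having it succeed at a very different rate than those without.

def success_rate(transactions):
    return sum(ok for _, ok in transactions) / len(transactions)

def discrimination_score(transactions, attr):
    """Absolute gap between success rates with and without the attribute."""
    with_a = [t for t in transactions if attr in t[0]]
    without = [t for t in transactions if attr not in t[0]]
    if not with_a or not without:
        return 0.0  # attribute never varies: it cannot discriminate
    return abs(success_rate(with_a) - success_rate(without))

data = [({"escrow", "photos"}, True), ({"photos"}, True),
        ({"no_returns"}, False), ({"no_returns", "photos"}, False)]
ranked = sorted(["escrow", "photos", "no_returns"],
                key=lambda a: discrimination_score(data, a), reverse=True)
print(ranked)
```

Attributes at the top of such a ranking are the ones worth building stereotypes over; uninformative attributes drop to the bottom and can be ignored.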

P2P Storage Systems


Peer-to-peer storage services are a cost-effective alternative for data backup. A basic question that arises in the design of such systems is: on which peers do we store redundant data? Choosing appropriate peers for data backup is important at a microscopic level, from an end-user's perspective, to guarantee good performance, e.g., quick access and high availability, as well as at a macroscopic level, e.g., for system optimization and fairness. Existing systems apply different techniques, including random selection, selection based on a distributed hash table (DHT), or selection based on the peers' past availability patterns. In this work, we propose as an alternative a contextual trust based data placement scheme to select suitable data holders. It is originally designed for, and applicable to, scenarios where there is inadequate historical information about peers, a common situation in large-scale systems. Specifically, our scheme estimates the trustworthiness of a peer based on stereotypes, formed by aggregating information from interactions with other (similar) peers. Simulation experiments show that our placement scheme outperforms not only random selection but also schemes using historical information, in terms of both the achieved data availability and the bandwidth overheads to sustain the system.
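A simplified sketch of such trust-guided placement, assuming each candidate peer is described by tags for which the data owner already holds stereotype scores (tag names and values are invented):

```python
# Toy placement: score each candidate peer by averaging the owner's
# stereotype scores over the peer's matching tags, then pick the top
# `redundancy` peers to hold the replicas.

def peer_trust(tags, stereotypes):
    """Average the owner's stereotype scores over a peer's known tags."""
    scores = [stereotypes[t] for t in tags if t in stereotypes]
    return sum(scores) / len(scores) if scores else 0.5  # neutral if unknown

def choose_holders(peers, stereotypes, redundancy):
    """Select the `redundancy` most trustworthy peers as data holders."""
    ranked = sorted(peers, key=lambda p: peer_trust(peers[p], stereotypes),
                    reverse=True)
    return ranked[:redundancy]

stereotypes = {"always_on": 0.9, "night_only": 0.4, "fast_link": 0.7}
peers = {"p1": ["always_on", "fast_link"], "p2": ["night_only"],
         "p3": ["always_on"], "p4": []}
print(choose_holders(peers, stereotypes, redundancy=2))
```

The key property is that a peer with no interaction history at all (like `p4` above) can still be scored, via its similarity to peers the owner has dealt with before.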



Cloud Service Providers


Cloud computing has emerged as a popular paradigm that offers computing resources (e.g., CPU, storage, bandwidth, software) as scalable, on-demand services over the Internet. As more players enter this emerging market, a heterogeneous cloud computing market is expected to evolve, where individual players will have different volumes of resources and will provide specialized services with different levels of quality of service. It is expected that service providers will thus, besides competing, also collaborate to complement their resources, in order to improve resource utilization and to combine individual services into the more complex value chains and end-to-end solutions required by customers. Selecting suitable partners in a decentralized setting is challenging due to various factors, such as the lack of global coordination or information, as well as diversity and scale. Trust is known to play an important role in promoting cooperation in many decentralized settings, including society at large as well as the Internet, e.g., in e-commerce. In this work, we explore how trust can promote collaboration among service providers. The novelty of our approach is a framework to combine disparate trust information - from direct interactions, from (indirect) references among service providers, and from customer feedback - depending on the availability of these different kinds of information. Doing so provides decision making guidance to service providers in initializing collaborations by selecting trustworthy partners. Simulation results demonstrate the promise of our approach by showing that, compared to random selection, it can effectively select trustworthy collaborators to achieve better quality of service.
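One simple way to combine such disparate evidence is sketched below, with assumed fixed weights that are renormalized over whichever sources happen to be available (the weights and scores are illustrative, not the framework's actual parameters):

```python
# Toy combination of trust evidence about a provider from three sources:
# direct interactions, indirect references, and customer feedback. Weights
# of missing sources are redistributed over the available ones.

def combined_trust(direct=None, references=None, feedback=None,
                   weights=(0.5, 0.3, 0.2)):
    """Weighted mix of whichever trust sources are available."""
    sources = (direct, references, feedback)
    available = [(v, w) for v, w in zip(sources, weights) if v is not None]
    if not available:
        return 0.5  # nothing known at all: neutral prior
    norm = sum(w for _, w in available)  # renormalize remaining weights
    return sum(v * w for v, w in available) / norm

print(combined_trust(direct=0.9, references=0.6, feedback=0.8))
print(combined_trust(references=0.6))  # only indirect references known
```

The renormalization step is what lets the same formula degrade gracefully: a provider with no direct history is judged purely on references and feedback, rather than being penalized for missing data.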



PhD Thesis


  • StereoTrust: A group based personalized trust model,
    Xin Liu, Anwitaman Datta, Krzysztof Rzadca, Ee-Peng Lim
    CIKM 2009, 18th ACM conference on Information and knowledge management.
  • MetaTrust: Discriminant Analysis of Local Information for Global Trust Assessment, (extended abstract)
    Xin Liu, Gilles Tredan, Anwitaman Datta
    AAMAS 2011, The 10th International Conference on Autonomous Agents & Multiagent Systems.
  • A trust prediction approach capturing agents' dynamic behavior,
    Xin Liu, Anwitaman Datta
    IJCAI 2011, International Joint Conference on Artificial Intelligence.
  • Modeling Context Aware Dynamic Trust Using Hidden Markov Model,
    Xin Liu, Anwitaman Datta
    AAAI 2012, Twenty-Sixth AAAI Conference on Artificial Intelligence.
  • Detecting Imprudence of 'Reliable' Sellers in Online Auction Sites,
    Xin Liu, Anwitaman Datta, Hui Fang, Jie Zhang
    TrustCom 2012, The 11th IEEE International Conference on Trust, Security and Privacy in Computing and Communications.
  • Trust beyond reputation: A computational trust model based on stereotypes,
    Xin Liu, Anwitaman Datta, Krzysztof Rzadca
    Elsevier ECRA, Electronic Commerce Research and Applications Journal. Preprint on arXiv.
  • A generic trust framework for large-scale open systems using machine learning,
    Xin Liu, Gilles Tredan, Anwitaman Datta
    Computational Intelligence (Wiley Journal), Preprint on arXiv.


  • Contextual Trust Aided Enhancement of Data Availability in Peer-to-Peer Backup Storage Systems,
    Xin Liu, Anwitaman Datta
    J. of Network and Systems Management, 20(2), 2012.
  • On trust guided collaboration among cloud service providers,
    Xin Liu, Anwitaman Datta
    TrustCol 2010, The Fifth International Workshop on Trusted Collaboration (with CollaborateCom 2010).
  • Using Stereotypes to Identify Risky Transactions in Internet Auctions,
    Xin Liu, Tomasz Kaszuba, Radoslaw Nielek, Anwitaman Datta, Adam Wierzbicki (work done in equal partnership with researchers at PJIIT, Poland)
    SocialCom 2010, Second IEEE International Conference on Social Computing.
  • Trust and Fairness Management in P2P and Grid systems,
    Adam Wierzbicki, Tomasz Kaszuba, Radoslaw Nielek, Anwitaman Datta (work mainly done by researchers at PJIIT, Poland)
    Handbook of Research on P2P and Grid Systems for Service-Oriented Computing: Models, Methodologies and Applications, IGI Global.
  • Improving computational trust representation based on Internet auction traces,
    Adam Wierzbicki, Tomasz Kaszuba, Radoslaw Nielek, Paulina Adamska, Anwitaman Datta (work mainly done by researchers at PJIIT, Poland)
    Decision Support Systems Journal, 2013.

Overview articles
