Radu Jurca


Truthful Reputation Mechanisms for Online Systems

Overview

The availability of ubiquitous communication through the Internet is driving the migration of business transactions from direct contact between people to electronically mediated interactions. People interact electronically either through human-computer interfaces or through programs that represent humans, so-called agents. In either case, no physical interaction between the parties takes place, and the resulting systems are much more susceptible to fraud and deception.

Traditional methods to avoid cheating involve cryptographic schemes and trusted third parties that oversee every transaction. Such systems are very costly, introduce potential bottlenecks, and may be difficult to deploy due to the complexity and heterogeneity of most online environments: e.g., agents in different geographical locations may be subject to different legislation or may use different interaction protocols.

Reputation mechanisms offer a novel and effective way of establishing the trust that is essential to the functioning of any market. They collect information about the history (i.e., the past transactions) of market participants and make their reputation public. Prospective partners factor this reputation information into their decisions, and thus make better-informed choices.

Online reputation mechanisms enjoy huge success. They are implemented by most e-commerce sites available today, and are taken seriously by human users: numerous empirical studies document the existence of reputation premiums, i.e., providers with a higher reputation can charge higher prices. Nonetheless, the formal investigation of reputation mechanisms is still a young research area.

Incentive-Compatible Signaling Reputation Mechanisms

The main function of signaling reputation mechanisms is to estimate, as accurately as possible, the fixed but unknown characteristics of a product or service that is repeatedly consumed by a group of users. Reputation information is computed by iteratively integrating individual feedback, as prescribed by the comprehensive literature on the theory of learning. However, the effectiveness of such mechanisms is contingent on obtaining honest feedback. Well-known incidents involving eBay and Amazon feedback have made it clear that users do not always find it in their best interest to report the truth. Within this line of research, we:

  • provide explicit payment schemes (rewards paid for feedback reports) such that rational agents maximize their revenue by reporting the truth. When such payments are added to the reputation mechanism, honest reporting becomes a Nash equilibrium;
  • use automated mechanism design techniques to compute optimal payments (see the sketch after this list);
  • advocate for the use of filtering mechanisms in conjunction with payment schemes. By filtering out reports that are likely to be false, incentive-compatible payments can be substantially reduced;
  • provide techniques for eliminating undesired (i.e., other than the truthful) equilibria;
  • provide guidelines for designing payment schemes that are robust against lying coalitions of a certain size.
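The automated-design step (second bullet above) admits a compact illustration. In the sketch below, minimal incentive-compatible payments are the solution of a small linear program: a rater is paid tau(report, reference report) depending on how her report relates to that of a randomly chosen reference rater (assumed honest), and the constraints force honest reporting to beat every misreport by a fixed margin. The prior, the signal model, and the margin LAMBDA are illustrative assumptions, not values from the papers.

    # Minimal incentive-compatible payments as a linear program (sketch).
    import numpy as np
    from scipy.optimize import linprog

    prior = np.array([0.6, 0.4])          # Pr(type): good, bad (assumed)
    q = np.array([[0.2, 0.8],             # Pr(signal | good type), signals: 0, 1
                  [0.7, 0.3]])            # Pr(signal | bad type)
    LAMBDA = 0.1                          # required honesty margin (assumed)

    p_s = prior @ q                                 # Pr(my signal = s)
    post = (prior[None, :] * q.T) / p_s[:, None]    # Pr(type | my signal)
    pred = post @ q                       # Pr(reference report = r | my signal = s)

    # Decision variables: tau[reported signal, reference report] >= 0,
    # flattened to x[2*s + r].  Objective: expected payment when honest.
    c = (p_s[:, None] * pred).ravel()

    A_ub, b_ub = [], []
    for s in (0, 1):                      # for every true signal s ...
        lie = 1 - s                       # ... misreporting must lose by LAMBDA
        row = np.zeros(4)
        row[2 * lie:2 * lie + 2] += pred[s]   # expected payoff of lying
        row[2 * s:2 * s + 2] -= pred[s]       # minus expected payoff of honesty
        A_ub.append(row)
        b_ub.append(-LAMBDA)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
    print("tau[report, reference]:\n", res.x.reshape(2, 2).round(3))
    print("expected cost per report:", round(res.fun, 4))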

Incentive-Compatible Sanctioning Reputation Mechanisms

Sanctioning reputation mechanisms, on the other hand, are mainly used to encourage cooperative behavior in environments with moral hazard. Providers are equally capable of delivering good service, but doing so requires costly effort. The role of the reputation mechanism is to expose malicious providers and label them with a bad reputation. When the immediate loss from not cheating is offset by the expected gains from future transactions in which the agent enjoys a higher reputation, cooperation becomes a stable equilibrium.
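This trade-off can be written down compactly. As a minimal sketch (the actual model is richer), let c denote the per-transaction cost of effort, δ the discount factor, and Δu the per-transaction payoff premium enjoyed by a provider with a good reputation. Cooperation is then sustainable whenever

    c \le \frac{\delta}{1-\delta}\,\Delta u

i.e., whenever the discounted stream of future reputation premiums outweighs the one-shot saving from shirking.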

For this class of mechanisms, honest reporting can be motivated by the repeated presence of the client in the market. We describe a simple mechanism in which the feedback reported by the client is checked against a self-report made by the provider. We show that there is an equilibrium where all transactions (and all reports) are honest, and give upper bounds on the amount of false information recorded by the reputation mechanism in any other equilibrium.
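A toy version of the bookkeeping behind this mechanism is sketched below. The outcome labels and the confession rule are illustrative; the concrete fines and compensations that make honest behavior an equilibrium, and the bounds mentioned above, are derived in the papers.

    # Cross-reporting sketch: the client's feedback is checked against a
    # self-report made by the provider.  Only the bookkeeping is shown;
    # the incentive analysis lives in the papers, not in this toy.
    def settle(provider_claims_good: bool, client_reports_good: bool) -> str:
        if not provider_claims_good:
            # Confession: the provider admits imperfect service; the client
            # is compensated and no reputation penalty is recorded.
            return "compensate client; no reputation penalty"
        if client_reports_good:
            # Both sides agree that the service was good.
            return "record successful transaction"
        # The provider claims success but the client complains: the
        # disagreement is recorded as negative evidence against the provider.
        return "record dispute against provider"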

Novel Applications of Reputation Mechanisms

One promising application area for reputation mechanisms is monitoring Quality of Service (QoS) parameters in markets of web services. Service-level agreements (SLAs) establish a contract between service providers and clients concerning QoS parameters. Without proper penalties, service providers have strong incentives to deviate from the advertised QoS, causing losses to the clients. Reliable QoS monitoring, together with penalties computed from the actually delivered QoS, is therefore essential for the trustworthiness of a service-oriented environment. Instead of traditional monitoring techniques, we use quality ratings from the clients to estimate the delivered QoS: a reputation mechanism collects the ratings and computes the quality actually delivered to the clients. The mechanism provides incentives for the clients to report honestly, and pays special attention to minimizing cost and overhead.
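A minimal sketch of the aggregation and penalty step follows; the function names, the [0, 1] rating scale, and the linear penalty rule are assumptions made for illustration.

    # Estimate the delivered QoS from client ratings and derive an SLA
    # penalty.  The payments that keep the ratings honest are described
    # in the papers, not here.
    from statistics import mean

    def estimate_qos(client_ratings):
        """Estimate the delivered QoS as the average client rating in [0, 1]."""
        return mean(client_ratings)

    def sla_penalty(delivered, promised, rate=100.0):
        """Charge the provider in proportion to the QoS shortfall, if any."""
        return rate * max(0.0, promised - delivered)

    ratings = [0.9, 0.7, 0.8, 0.6, 0.9]   # feedback collected by the mechanism
    delivered = estimate_qos(ratings)
    print(f"estimated QoS = {delivered:.2f}")
    print(f"penalty = {sla_penalty(delivered, promised=0.85):.2f}")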

Understanding Real Feedback Forums

Recent analysis raises important questions regarding the ability of existing feedback forums to reflect the real quality of a product. In the absence of clear incentives, users with a moderate outlook will not bother to voice their opinions, which leads to an unrepresentative sample of reviews. For example, Amazon ratings of books or CDs tend to follow bimodal, U-shaped distributions where most of the ratings are either very good or very bad. Controlled experiments, on the other hand, reveal opinions on the same items that are normally distributed. Under these circumstances, using the arithmetic mean to predict quality (as most forums actually do) gives the typical user an estimator that has high variance and is often misleading. Improving the way we aggregate the information available from online reviews requires a deep understanding of the underlying factors that bias the rating behavior of users.
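The self-selection effect is easy to reproduce in a toy simulation: draw opinions from a normal distribution, let only users with sufficiently extreme opinions post a review, and compare the mean of the posted reviews with the true mean opinion. All parameters below are illustrative.

    # Toy simulation of reporting bias: opinions are normally distributed,
    # but only extreme opinions are voiced, yielding a U-shaped sample
    # whose arithmetic mean is a biased estimator of the true quality.
    import numpy as np

    rng = np.random.default_rng(0)
    true_quality = 3.4                                # on a 1..5 star scale
    opinions = np.clip(rng.normal(true_quality, 0.8, 10_000), 1, 5)

    # Self-selection: only opinions far from the middle are posted.
    posted = opinions[np.abs(opinions - 3.0) > 1.2]

    print(f"true mean opinion:      {opinions.mean():.2f}")
    print(f"mean of posted reviews: {posted.mean():.2f}")
    print(f"share of users posting: {posted.size / opinions.size:.0%}")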

Last modified: Feb 15, 2008