Truthful Reputation Mechanisms for Online Systems

Overview

The availability of ubiquitous communication through the Internet is driving the migration of business transactions from direct contact between people to electronically mediated interactions. People interact electronically either through human-computer interfaces or through programs representing humans, so-called agents. In either case, no physical interaction between the parties takes place, and the resulting systems are much more susceptible to fraud and deception. Traditional methods for preventing cheating rely on cryptographic schemes and trusted third parties that oversee every transaction. Such systems are costly, introduce potential bottlenecks, and may be difficult to deploy given the complexity and heterogeneity of most online environments: agents in different geographical locations may, for example, be subject to different legislation or different interaction protocols.

Reputation mechanisms offer a novel and effective way of establishing the level of trust that is essential to the functioning of any market. They collect information about the history (i.e., the past transactions) of market participants and make their reputation public. Prospective partners take this reputation information into account when choosing whom to interact with, and thus make better-informed decisions. Online reputation mechanisms enjoy huge success: they are implemented by most e-commerce sites available today and are taken seriously by human users. Numerous empirical studies document the existence of reputation premiums, whereby providers with higher reputation can charge higher prices. Nonetheless, the formal investigation of reputation mechanisms is still a young research area.

Incentive-Compatible Signaling Reputation Mechanisms

The main function of signaling reputation mechanisms is to approximate as well as possible the fixed but unknown characteristics of a product or service that is repeatedly consumed by a group of users. Reputation information is computed by iteratively integrating individual feedback, as prescribed by the comprehensive literature on the theory of learning. However, the effectiveness of such mechanisms depends on obtaining honest feedback. Notorious examples involving eBay or Amazon feedback have made it clear that users do not always find it in their best interest to report the truth. Within this line of research, we design feedback payment schemes that make truthful reporting a rational strategy for the reporters, that resist collusion, and that minimize the payments needed to reward honest feedback.
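For concreteness, the sketch below illustrates the standard peer-prediction idea that payment schemes of this kind build on: a reporter is paid according to how well their report predicts the report of a randomly chosen reference reporter, scored with a proper scoring rule. This is only a minimal illustration under assumed parameters (the prior, the signal probabilities, and the function names are all made up for the example) and not the exact payment schemes analyzed in the papers listed below.

```python
# Toy peer-prediction payment: a report is scored with the logarithmic proper
# scoring rule against the report of a reference reporter. All numbers below
# (prior, conditional signal probabilities) are illustrative assumptions.

import math

PRIOR_HIGH = 0.5         # prior probability that the item is of high quality
P_POS_GIVEN_HIGH = 0.8   # probability of a positive experience if quality is high
P_POS_GIVEN_LOW = 0.2    # probability of a positive experience if quality is low


def posterior_high(report: str) -> float:
    """Posterior probability of high quality after seeing one report ('pos'/'neg')."""
    is_pos = report == "pos"
    like_high = P_POS_GIVEN_HIGH if is_pos else 1 - P_POS_GIVEN_HIGH
    like_low = P_POS_GIVEN_LOW if is_pos else 1 - P_POS_GIVEN_LOW
    return (like_high * PRIOR_HIGH) / (like_high * PRIOR_HIGH + like_low * (1 - PRIOR_HIGH))


def payment(report: str, reference_report: str) -> float:
    """Log-scoring payment for `report`, scored against a peer's report."""
    q_high = posterior_high(report)
    # Predicted probability that the reference reporter also experiences 'pos'.
    pred_pos = q_high * P_POS_GIVEN_HIGH + (1 - q_high) * P_POS_GIVEN_LOW
    observed = pred_pos if reference_report == "pos" else 1 - pred_pos
    # Constant offset keeps payments non-negative for this parameterization.
    return math.log(observed) + 2.0


if __name__ == "__main__":
    for r in ("pos", "neg"):
        for ref in ("pos", "neg"):
            print(f"report={r:3s} reference={ref:3s} payment={payment(r, ref):.3f}")
```

Because the logarithmic scoring rule is strictly proper and the two experiences are correlated through the item's unknown quality, reporting the truly observed signal maximizes a reporter's expected payment whenever the reference reporter is also truthful; the research questions addressed in the papers below (collusion resistance, minimal payment budgets) go beyond this basic property.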
Selected papers:

R. Jurca and B. Faltings. Collusion Resistant, Incentive Compatible Feedback Payments. Proceedings of the ACM Conference on Electronic Commerce (EC'07), pp. 200-209, San Diego, June 11-15, 2007.
R. Jurca and B. Faltings. Minimum Payments that Reward Honest Reputation Feedback. Proceedings of the ACM Conference on Electronic Commerce (EC'06), pp. 190-199, Ann Arbor, Michigan, June 11-15, 2006.
R. Jurca and B. Faltings. Using CHI-Scores to Reward Honest Feedback from Repeated Interactions. Proceedings of AAMAS'06, pp. 1233-1240, Hakodate, Japan, May 8-12, 2006.
R. Jurca and B. Faltings. Enforcing Truthful Strategies in Incentive Compatible Reputation Mechanisms. Internet and Network Economics (WINE 2005), Lecture Notes in Computer Science, Volume 3828, pp. 268-277, 2005.

Incentive-Compatible Sanctioning Reputation Mechanisms

Sanctioning reputation mechanisms, on the other hand, are mainly used to encourage cooperative behavior in environments with moral hazard. Providers are equally capable of delivering good service, but doing so requires costly effort. The role of the reputation mechanism is to expose malicious providers and label them with a bad reputation. When the loss incurred by not cheating in the present is offset by the expected gains from future transactions in which the agent has a higher reputation, cooperation becomes a stable equilibrium. For this class of mechanisms, honest reporting can be motivated by the repeated presence of the client in the market. We describe a simple mechanism in which the feedback reported by the client is confronted with a self-report made by the provider. We show that there is an equilibrium where all transactions (and reports) are honest, and we give upper bounds on the amount of false information recorded by the reputation mechanism in any other equilibrium.
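As a rough illustration of the confrontation step only, the sketch below shows how a client report and a provider self-report could be combined into the provider's public record. The record structure, the "disputed" category, and the reputation formula are illustrative assumptions; they are not the rules of the mechanism (e.g., CONFESS) analyzed in the papers below, which additionally specify the payments and penalties that make honest reporting an equilibrium.

```python
# Toy sketch of a sanctioning reputation mechanism that confronts the client's
# feedback with the provider's self-report. The recording rules below are
# illustrative assumptions, not the mechanism from the papers listed after
# this sketch.

from dataclasses import dataclass


@dataclass
class ProviderRecord:
    positive: int = 0   # transactions reported successful by both sides
    negative: int = 0   # failures admitted by the provider itself
    disputed: int = 0   # client complained while the provider claimed success

    @property
    def reputation(self) -> float:
        total = self.positive + self.negative + self.disputed
        return self.positive / total if total else 1.0


def record_transaction(rec: ProviderRecord,
                       provider_claims_success: bool,
                       client_reports_success: bool) -> None:
    """Update the provider's record from the two (possibly conflicting) reports."""
    if provider_claims_success and client_reports_success:
        rec.positive += 1
    elif not provider_claims_success:
        # The provider confessed a failure; the negative outcome is recorded
        # regardless of what the client reports.
        rec.negative += 1
    else:
        # Conflicting reports: the mechanism cannot tell who lied, so the
        # transaction is flagged as disputed and still lowers the reputation.
        rec.disputed += 1


if __name__ == "__main__":
    rec = ProviderRecord()
    record_transaction(rec, provider_claims_success=True, client_reports_success=True)
    record_transaction(rec, provider_claims_success=True, client_reports_success=False)
    record_transaction(rec, provider_claims_success=False, client_reports_success=False)
    print(f"reputation = {rec.reputation:.2f}")  # 1 positive out of 3 transactions
```

The sketch only shows how agreement, confession, and conflict are detected and recorded; the substance of the mechanism lies in choosing payments and penalties so that lying is unprofitable for both the client and the provider.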
Selected papers:

R. Jurca and B. Faltings. Obtaining Reliable Feedback for Sanctioning Reputation Mechanisms. Journal of Artificial Intelligence Research (JAIR), volume 29, pp. 391-419, 2007.
R. Jurca and B. Faltings. Truthful Reputation Information in Electronic Markets without Independent Verification. Technical Report ID: IC/2004/08, Swiss Federal Institute of Technology (EPFL), http://ic2.epfl.ch/publications, 2004.
R. Jurca and B. Faltings. "CONFESS". An Incentive Compatible Reputation Mechanism for the Online Hotel Booking Industry. Proceedings of the IEEE Conference on E-Commerce, pp. 205-212, San Diego, CA, USA, 2004.

Novel Applications of Reputation Mechanisms

One promising application area for reputation mechanisms is the monitoring of Quality of Service (QoS) parameters in markets of web services. Service-level agreements (SLAs) establish a contract between service providers and clients concerning QoS parameters. Without proper penalties, service providers have strong incentives to deviate from the advertised QoS, causing losses to the clients. Reliable QoS monitoring (and proper penalties computed on the basis of the delivered QoS) is therefore essential for the trustworthiness of a service-oriented environment. Instead of traditional monitoring techniques, we use quality ratings from the clients to estimate the delivered QoS. A reputation mechanism collects the ratings and computes the actual quality delivered to the clients. The mechanism provides incentives for the clients to report honestly, and pays special attention to minimizing cost and overhead.

Selected papers:

R. Jurca, W. Binder and B. Faltings. Reliable QoS Monitoring Based on Client Feedback. Proceedings of the 16th International World Wide Web Conference (WWW'07), pp. 1003-1011, Banff, Canada, May 8-12, 2007.
R. Jurca and B. Faltings. Reputation-based Service Level Agreements for Web Services. Service-Oriented Computing (ICSOC 2005), Lecture Notes in Computer Science, Volume 3826, pp. 396-409, 2005.

Understanding Real Feedback Forums

Recent analysis raises important questions regarding the ability of existing feedback forums to reflect the real quality of a product. In the absence of clear incentives, users with a moderate outlook do not bother to voice their opinions, which leads to an unrepresentative sample of reviews. For example, Amazon ratings of books or CDs very often follow bimodal, U-shaped distributions in which most of the ratings are either very good or very bad. Controlled experiments, on the other hand, reveal opinions on the same items that are normally distributed. Under these circumstances, using the arithmetic mean to predict quality (as most forums actually do) gives the typical user an estimator with high variance that is often wrong. Improving the way we aggregate the information available from online reviews requires a deep understanding of the underlying factors that bias the rating behavior of users.

Selected papers:

A. Talwar, R. Jurca and B. Faltings. Understanding User Behavior in Online Feedback Reporting. Proceedings of the ACM Conference on Electronic Commerce (EC'07), pp. 134-142, San Diego, June 11-15, 2007.

Projects and collaborations:

Computational Reputation Mechanisms for Enabling Peer-To-Peer Commerce in Decentralized Networks
Data, Information, and Process Integration with Semantic Web Services (DIP)

Last modified: Feb 15, 2008