Much of the new “gig economy” relies on reputation systems to reduce problems of asymmetric information. There is evidence that one component of these reputation systems, online reviews, provides information to these markets. However, less is known about how these reviews interact and compare with other pieces of information available in these markets. This paper provides a more complete picture of the reputation system in an online labor market. I compare the informational content of online reviews with other sources of information about worker ability, including review comments, standardized exam scores, and the worker’s country. I estimate the effect of each component on wages and worker attrition. Reviews have a relatively small effect on both wages and attrition; however, I am able to separate out the dual role of reviews: rewarding good workers and punishing bad ones. Finally, I investigate why firms leave reviews at all, and find that firm reputation and re-hiring considerations incentivize firms to leave informative reviews.
The ability to estimate peer effects in network models has been advanced considerably by the IV model of Bramoullé et al. (2009). While such IV estimates work well for very sparse networks, they exhibit very weak power for networks of even modest density. We review and extend the findings of Bramoullé et al. (2009) and then propose an alternative estimator. We show that our new estimator performs approximately as well as IV in very sparse networks and much better in networks of moderate density.
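For readers unfamiliar with the setup, the linear-in-means peer-effects model that the IV approach of Bramoullé et al. (2009) targets can be sketched as follows (notation is standard but chosen here for illustration):

\[
y = \alpha \iota_n + \beta G y + X \gamma + G X \delta + \varepsilon,
\qquad \mathbb{E}[\varepsilon \mid X] = 0,
\]

where $y$ is the $n \times 1$ outcome vector, $G$ is the row-normalized network adjacency matrix, $Gy$ captures the endogenous peer effect $\beta$, and $GX$ captures contextual effects $\delta$. Because $Gy$ is endogenous, the IV strategy instruments it with characteristics of more distant peers, e.g. $G^2 X$, which is valid when $I$, $G$, and $G^2$ are linearly independent. In dense networks $G^2 X$ becomes nearly collinear with $X$ and $GX$, which is the weak-instrument problem the abstract refers to.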
Download rates of academic journals have joined citation rates as commonly used indicators of the value of journal subscriptions. While citation rates reflect worldwide influence, the value that a single library places on access to a journal is probably more accurately measured by the rate at which local users download it. If local download rates accurately measure local usage, there is a strong case for employing download rates to compare the cost-effectiveness of journals. We examine download data for more than five thousand journals subscribed to by the ten universities in the University of California system. We find that, controlling for measured journal characteristics (citation rates, number of articles, and year of download), download rates, as captured by the ratio of downloads to citations, differ substantially across academic disciplines. This suggests that discipline-specific adjustments to download rates are needed to construct a reliable tool for estimating local usage. Even after adding academic discipline to the variables we control for, we find that there remain substantial “publisher effects”, with some publishers recording significantly more downloads than the characteristics of their journals would predict. While the usage tool can be modified to incorporate the publisher effect, this raises the question of what causes such substantial differences across publishers once journal and discipline characteristics are accounted for.
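The discipline adjustment described above can be illustrated with a minimal sketch: compute each journal's raw downloads-to-citations ratio, then express it relative to the mean ratio of its own discipline, so journals are compared against field peers rather than against all journals pooled. The journal records, field names, and the use of a simple discipline mean are all illustrative assumptions, not the paper's actual data or estimator.

```python
from collections import defaultdict

# Invented toy records; field names are assumptions for illustration only.
journals = [
    {"name": "J1", "discipline": "econ", "downloads": 1200, "citations": 300},
    {"name": "J2", "discipline": "econ", "downloads": 800,  "citations": 250},
    {"name": "J3", "discipline": "bio",  "downloads": 9000, "citations": 1500},
    {"name": "J4", "discipline": "bio",  "downloads": 6000, "citations": 1200},
]

def ratio(j):
    """Raw downloads-to-citations ratio for one journal."""
    return j["downloads"] / j["citations"]

# Mean ratio within each discipline.
by_discipline = defaultdict(list)
for j in journals:
    by_discipline[j["discipline"]].append(ratio(j))
discipline_mean = {d: sum(rs) / len(rs) for d, rs in by_discipline.items()}

# Discipline-adjusted rate: raw ratio divided by the discipline mean.
# Values above 1 indicate heavier usage than the field norm.
adjusted = {j["name"]: ratio(j) / discipline_mean[j["discipline"]]
            for j in journals}
```

In this toy data the biology journals download far more per citation than the economics journals in raw terms, but after the within-discipline adjustment the four journals are directly comparable, which is the point of the discipline-specific correction.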