
Peer quality review #32

Open
mbernst opened this issue Dec 22, 2017 · 2 comments

mbernst commented Dec 22, 2017

Problem

Requesters focus a lot on result quality, but often want to be hands-off about it: they are often willing to pay more if they can worry less. For their part, workers are risk-averse around rejection (see McInnis's paper "Taking a HIT: Designing around Rejection, Mistrust, Risk, and Workers' Experiences in Amazon Mechanical Turk").

Proposal

One approach that has been explored in academic circles (e.g., Find-Fix-Verify, Argonaut, Shepherd, Crowd Guilds), but that currently has to be implemented manually by the requester, is quality review. Typically, submissions are reviewed by peer workers, or by known high-quality workers, before being returned to the requester.

Our proposal would be to investigate whether peer review substantially improves quality for work on the platform, and if so, make it easily accessible through a single button when the requester launches the task. We explored this a couple of summers ago in the context of DaemoScript, where the programmer has no explicit relevance feedback signal to provide.

So, specifically, we would explore whether it would be a good idea to allow requesters to check a box and pay a little more money in return for having a high-quality worker review results before they're returned to the requester. Specific steps would likely include:

  • Test whether peer review helps improve results for the kinds of tasks on Daemo
  • Figure out who should be peer reviewers (e.g., workers who have higher Boomerang scores than the person who did the work? see the sketch after this list)
  • Figure out how to price peer review accurately
  • If all looks good, implement it on the platform through as simple a user interface as possible (one-click)
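
As a rough, hypothetical sketch of the eligibility rule in the second bullet: the function and the `boomerang_score` lookup below are made up for illustration, not part of Daemo's actual data model.

```python
# Hypothetical sketch of the reviewer-eligibility idea, not Daemo's implementation.
# `boomerang_score` is an assumed callable mapping a worker to their Boomerang score.

def eligible_reviewers(workers, author, boomerang_score, margin=0.0):
    """Workers whose Boomerang score exceeds the author's by at least `margin`."""
    author_score = boomerang_score(author)
    return [w for w in workers
            if w != author and boomerang_score(w) > author_score + margin]
```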

Implications

Short-term: this could help us demonstrate quality improvements and make workers less concerned about immediate rejection, because other workers will vet the work first.

Long-term: I am hopeful that this would increase requesters' trust in the quality of the platform, and increase workers' willingness to take risks and take on more work.

Contact

Michael Bernstein (@michael) on Slack


To officially join in, add yourself as an assignee to the proposal. To break consensus, comment using this template. To find out more about this process, read the how-to.

@mbernst mbernst added this to the Strategy milestone Dec 22, 2017
@mbernst mbernst self-assigned this Dec 22, 2017
@markwhiting markwhiting self-assigned this Dec 22, 2017
@neilthemathguy neilthemathguy self-assigned this Dec 23, 2017
@iceLearn iceLearn self-assigned this Dec 24, 2017
@iceLearn (Member) commented:

This is a very interesting proposal.

Our proposal would be to investigate whether peer review substantially improves quality for work on the platform,

This is like the guilds model we talked about, but since we don't have levels of workers here, I assume we'd let any worker who is interested review their colleagues' work.

if so, make it easily accessible through a single button when the requester launches the task.

I visualize a checkbox rather than a button, saying:
[ ] I'm willing to pay $x to get assured quality work

A few questions to think about, in case you haven't already figured them out:
  • Who determines the extra amount: do we, as the platform, propose it, or does the requester?
  • How do we determine who reviews which worker's work? Is it assigned at random, or do we follow some algorithm that identifies good-quality reviewers?
  • How do we know peers did a quality review? Can we incentivize good reviews, and use that to rank quality reviewers later?


mbernst commented Dec 24, 2017

@iceLearn

This is like the guilds model we talked about, but since we don't have levels of workers here, I assume we'd let any worker who is interested review their colleagues' work.

One option would be to use Boomerang scores: a higher Boomerang score would be the equivalent of a higher-level guild member. I'm open to alternatives; this is just an initial thought.

Who determines the extra amount: do we, as the platform, propose it, or does the requester?

I think this is something we will have to figure out as part of the strategic direction. One option would be to escrow more money than we expect to need, then return the unused amount to the requester's account. Another would be to let the requester price the review task.
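
To make the escrow option concrete, here is a minimal sketch of the arithmetic; the function names and the buffer ratio are illustrative assumptions, not Daemo APIs or decided policy.

```python
# Illustrative escrow arithmetic for the opt-in peer review fee; not a Daemo API.

def escrow_for_review(expected_review_cost, buffer_ratio=0.5):
    """Amount to hold up front: the expected review cost plus a safety buffer."""
    return expected_review_cost * (1 + buffer_ratio)

def settle_escrow(held_amount, actual_review_cost):
    """Pay the actual review cost; refund whatever was held but not used."""
    refund = max(held_amount - actual_review_cost, 0)
    return actual_review_cost, refund
```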

How do we determine who reviews which worker's work? Is it assigned at random, or do we follow some algorithm that identifies good-quality reviewers?

We can draw on previous literature here, like Argonaut or Crowd Guilds, which suggests allowing review by workers with higher reputation. The platform could post a review task back to Daemo automatically, but restrict who can do it.
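
A hedged sketch of what that reputation-restricted post-back might look like; `ReviewTask` and its fields are hypothetical, not Daemo's real task model.

```python
# Hypothetical review-task object; Daemo's actual task model would differ.
from dataclasses import dataclass

@dataclass
class ReviewTask:
    submission_id: str
    reward: float
    min_boomerang_score: float  # only workers above this score may accept the task

def make_review_task(submission_id, author_score, reward):
    """Post-back sketch: restrict acceptance to workers outranking the original author."""
    return ReviewTask(submission_id=submission_id,
                      reward=reward,
                      min_boomerang_score=author_score)
```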

How do we know peers did a quality review? Can we incentivize good reviews, and use that to rank quality reviewers later?

Great question; that's another thing we'd need to work out. For example, if a requester dislikes the results, should the blame fall on the reviewer or on the originator of the work?

I think what I'm saying here is: great questions, and these are decisions we'd need to make if and when we pursue this strategic direction.
