
Nomination: populate quality review group for User research #43

Open
iceLearn opened this issue Dec 25, 2017 · 28 comments

@iceLearn
Member

Problem

The Crowd Research Collective is full of ideas and designs that require user experience testing, user experience design, and user research to better understand new ideas. Most of the ideas will need review in these aspects, and currently the User Research group in quality review is empty.

My Proposal

I would like to nominate myself and Angela (currently not on GitHub, but a member of our collective) to populate this empty group.
At the same time, I invite anyone who might be interested to self-nominate or to nominate others as appropriate.

The nomination paragraphs follow:

@iceLearn has led many user research efforts in the past for this project (some examples: Link 0, Link 1, Link 2, Link 3) and is very interested in the field. Her ongoing PhD intersects with HCI, and she has contributed to many other projects in this area.

@Angela has completed many HCI-related MOOCs and has collaborated with me in this field many times throughout this project. She is interested in the subject, and I believe she will be a good candidate for the group.


Use comments to share your response or use emoji 👍 to show your support. To officially join in, add yourself as an assignee to the proposal. To break consensus, comment using this template. To find out more about this process, read the how-to.

@iceLearn iceLearn self-assigned this Dec 25, 2017
@iceLearn iceLearn changed the title Nomination: populate Design quality review group for User research Nomination: populate quality review group for User research Dec 30, 2017
@qwertyone
Contributor

qwertyone commented Dec 30, 2017

BREAKING CONSENSUS: I have to ask -- the design quality group seems to handle many functions, and it should. However, the following question is why I need to break consensus. Is it necessary to topically subset the quality review group this early, when the actual work demands are unknown? Would it be preferable to let the work demands split the group?

I anticipate that separating the design quality group by topic this early, without knowledge of the actual work demand, might create a specialization that biases action and decision making toward that topic without proper merit -- UX, IxD, survey design, and information architecture are all potential alternative groups that might arise. Given that, I would propose postponing this division of labor until the work demands require the benefits of specialization. Diverting resources and responsibilities from the main Quality Design Review group too early might compromise and complicate the real needs of the group. Specialization might be better merited when 5 or more projects arise concurrently, where specialist domain knowledge could speed the quality review process. To address populating the groups, I would encourage that the nominations here stand, but fall under the design quality review team instead.

@markwhiting
Member

@qwertyone this is a group that was among those voted on as part of our governance, so it is not being introduced by @iceLearn here. You can see which groups we have here.

Also, nominations are not proposals in the normal sense, so I think we've basically agreed that they cannot have consensus broken on them (I know that aspect is not what you were breaking on, per se).

I do agree with your point: we shouldn't make groups we don't need, and perhaps there's an opportunity to update our policy to reflect that somehow. If you wanted to look into that, a separate proposal would probably be the best option.

@qwertyone
Contributor

I read that document before I wrote this, and I did not see the topical groups formally identified within a specific proposal that clarified their mission, members, duties, and expectations.

@markwhiting
Member

Yeah, I think that's something to be built out through a proposal, or perhaps something we might argue should not be built out, but instead simplified to fewer groups (which could also be a proposal).

I think, if I remember correctly, the groups are loosely based on groups we had previously.

@iceLearn
Member Author

iceLearn commented Jan 2, 2018

@qwertyone I hope your questions were clarified. I do understand your point about demand-driven groups based on projects; however, I believe, as we discussed throughout the governance process, that it will be more beneficial to have already-defined teams to handle work with clear objectives, such as user research, design, operations, etc.

@qwertyone
Contributor

@iceLearn I agree with having a group readily identified within the larger groups to adapt with demand.

@iceLearn
Member Author

iceLearn commented Jan 6, 2018

I will remove the question label on this. Following up on the meeting https://www.youtube.com/watch?v=ILc0EBkiKzg&feature=youtu.be , I believe the proposal has consensus to go to a vote.

Therefore I will add the voting label and add names to vote on.

@iceLearn
Member Author

iceLearn commented Jan 6, 2018

Vote @iceLearn (Dilrukshi)

This vote is for @iceLearn (Dilrukshi) to be added to the User Research QR team.

Vote for by using a thumbs up on this comment. Vote against by using a thumbs down on this comment.

@iceLearn
Member Author

iceLearn commented Jan 6, 2018

Vote Angela Richmond-Fuller (@arichmondfuller)

This vote is for Angela Richmond-Fuller (@arichmondfuller) to be added to the User Research QR team.

Vote for by using a thumbs up on this comment. Vote against by using a thumbs down on this comment.

@iceLearn iceLearn added voting and removed question labels Jan 6, 2018
@neilthemathguy

neilthemathguy commented Jan 6, 2018

I’m a little confused here. Could you explain what is meant by the quality review of UX?

@markwhiting said

Also, nominations are not proposals in the normal sense, so I think we've basically agreed that they can not have consensus broken on them (I know that aspect is not what you were braking on per se)

It is a proposal, so it can have BREAKING CONSENSUS.

@qwertyone
Contributor

qwertyone commented Jan 6, 2018

I was thinking that a purpose of these nominees might be to review the methods proposed for UX-related efforts.
Other things they might be able to do:

  • mentor selected efforts
  • prioritize based on the collective's priorities
  • hold discussions with the larger quality review team to align proposal directions

These are some directions I think they might head in... What are your thoughts, @iceLearn? If I am misunderstanding the intent of the proposal, please elaborate.

@iceLearn
Member Author

iceLearn commented Jan 7, 2018

@neilthemathguy, referring to your question "Could you explain what is meant by the quality review of UX?":
This is to populate the groups as per the operational group membership - https://docs.google.com/document/d/1465PAK2Q1zA-0juttTcbTuQOzABwNMY6yb-JXPSWGps/edit#

So the name might confuse, since the main group is called the "quality review group"; under that we have quality review groups for Design, Engineering, etc. User research is part of that quality review, and this proposal is to nominate its members.

@qwertyone - my opinion on how the user research group would review or function:

  • If a proposal is to create a new idea / feature / app, the user research group would be able to assist with quantitative/qualitative research into feasibility and requirements
  • Conducting market research, or product-focused research, on feasibility or the experience designed, etc.
    Some of these ideas can be found on the web; I offer this link as one example of such a role - https://artisantalent.com/job-descriptions/user-researcher-job-description/

A good example I can think of: say we create a mobile app for Daemo. The user research group could help review whether the features built provide a delightful experience, backed by some stats or ethnography, so the product can be optimized on that feedback.

@qwertyone
Contributor

Nomination
@anotherhuman Aaron has been formally trained in Quality Engineering and Cognitive Engineering, which involves a good background in statistics, HCI and business.

@neilthemathguy

neilthemathguy commented Jan 7, 2018

Vote Aaron G (@qwertyone)

This vote is for @qwertyone (Aaron G) to be added to the User Research QR team.

Vote for by using a thumbs up on this comment. Vote against by using a thumbs down on this comment.

@qwertyone
Contributor

I am fine with that.

@neilthemathguy

neilthemathguy commented Jan 10, 2018

At present, I don’t feel confident enough to vote for the above UX nominations for two reasons.

First, there are not enough results/activities to evaluate the UX work in the context of Daemo. We neither had enough users to conduct UX nor a sequence of studies beyond the ones listed above, in Slack, and during the weekly milestones. At present, I hardly see any recommendations that were made through UX and ultimately influenced or became part of Daemo’s UIs.

Second, I’m hoping to see technical depth in future UX research activities. I would be happy to help on that front. If I consider the slides shared above and prior work on Slack/Wiki as data points for nomination voting, I don’t find answers to the fundamental heuristic evaluation (HE) questions listed below:

  • What heuristics were violated? How and Why?
  • What factors contributed to the severity of a problem?
  • Who were the evaluators? Did they communicate with each other?
  • How many times did they evaluate the interface, and is there a full technical report of the HE?
  • What was the mean of a set of severity ratings from the evaluators?
  • What recommendations came out of the UX exercises?
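
One of these questions is directly computable. As a minimal sketch, with purely hypothetical finding names and severity scores (assuming a Nielsen-style 0-4 scale), the mean of a set of severity ratings is just the average of each finding's per-evaluator scores:

```python
# Hypothetical heuristic-evaluation data: severity ratings (0-4) given by
# three evaluators for two findings. All names and numbers are invented
# for illustration only.
ratings = {
    "H1: visibility of system status": [3, 4, 3],
    "H4: consistency and standards": [2, 1, 2],
}

def mean_severity(scores):
    """Average the per-evaluator severity scores for a single finding."""
    return sum(scores) / len(scores)

for finding, scores in ratings.items():
    print(f"{finding}: mean severity = {mean_severity(scores):.2f}")
```

A full technical HE report would pair each mean with the violated heuristic, the rationale, and a recommendation, which is what the questions above ask for.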

In the future, I would love to support the nominations (those listed above and new ones) if there is enough data to evaluate the candidates. The nominations can happen in 4-6 months, when more UX activities will have arisen and Crowd Researchers will have evaluated Daemo's interfaces.

I appreciate your understanding of my perspective.

@qwertyone
Contributor

qwertyone commented Jan 10, 2018 via email

@iceLearn
Member Author

iceLearn commented Jan 10, 2018

@neilthemathguy

there are not enough results/activities to evaluate the UX work in Daemo project. We neither had enough users to conduct UX nor had sequence of studies beyond the ones listed above, in slack, and during the weekly milestones. At present, I hardly see any recommendations that were made through UX and finally influenced or became the parts of Daemo’s UIs.

I disagree, because since this project started, many interfaces and interactions have been evaluated and recommendations made. For example, the initial task authoring was completely designed, evaluated, and recommended by the nominated team (Angela and myself) along with Adam Ginzberg (I don't know his GitHub ID; @aginberg in Slack) - slides 9, 10, and 11 in this deck show some of the prototypes - https://docs.google.com/presentation/d/1cLNFed0mr_eZ4w-BZ4aNK657PvG-Vm1NwwcbiW-ToJ4/edit#slide=id.g5b73afe61_2_31

At the same time, I led two instances on the Heroku platform with my Sri Lankan team to do a usability test with real experience (if I can find some time, I can show the Slack messages as evidence).

For our first UIST poster in 2015, the whole usability part was recommended by me, and I am surprised to see your comments, as I submitted the compiled report to you and you included it in the paper (some part of it, at least).

The task authoring page in Daemo has a direct change that I initiated - @shirishgoyal and @dmorina made the changes I pointed out - slides 10-11 in this deck - https://docs.google.com/presentation/d/1gBd4IwzETLB8JAwAeYRXTtfCJvNjhh-o-ahJhCi4Gxw/edit#slide=id.gbdc411914_8_58

Please refer to this deck on the heuristic evaluations and observations; we even did web interviews, and the videos are linked in it - https://docs.google.com/presentation/d/1Xl943fwHw69S2zjvy70r-VdnYnnyD3hj0pokz4cVlGk/edit#slide=id.p

We have recommendations for Guilds that we never really implemented, but at least we did the initial mockups based on the user experience - https://docs.google.com/presentation/d/1VsNxbpDpwdV8kx9IVQ-0kq5zXJ_FYpKatDLxrJjvD-g/edit#slide=id.g11e71ddcb1_0_8

"At present, I hardly see any recommendations that were made through UX and finally influenced or became the parts of Daemo’s UIs."

This project started in 2015 - are you considering only the present? There have been many recommendations, some considered even as of this point, and I am sad to see them underestimated and ignored.

Your questions are mostly covered in the slides I shared in this comment.

Thanks for your perspective.

@neilthemathguy

neilthemathguy commented Jan 10, 2018

Thanks @iceLearn! Yes, I'm aware of the work you have mentioned. At present, as I listed above, I don't have enough information to evaluate the nominations. I feel this may be due to our lack of a user base: we haven't done much UX yet.

The questions I've asked don't have answers in the documents. If I benchmark the work against the UX researchers I've known, there is still a gap. I've mentioned the reasons above, and I'm happy to discuss them in depth.

Note that my votes are very specific to the context of evaluation of this nomination proposal.

@shirishgoyal

This is my personal take and should be taken in a positive spirit. I think it might be helpful to first draft nomination rules for the different quality review groups, where boundaries are fuzzy and goals are not clear.

  • Proposal ideas

If the proposal is to create a new idea / feature / app user research group will be able to assist in doing a quantitative/ qualitative research feasibility and requirements

As we collectively discussed in the governance hangouts, this onus lies on the researchers/assignees who submit the proposal to develop a concrete plan with arguments supported by facts, experience, or user studies.

  • Agreeing with some of the other comments so far by @qwertyone and @neilthemathguy: there is no past history of enough user research or experience studies submitted following scientific methods. What should our minimum expectation be for the quality of output from people who have formal education in the field and who can nominate themselves to review others?

  • My personal evaluation, based on the job description shared in the nomination, of all the links shared is below:

  1. Link 0: Seems to roughly follow a usability test plan. According to the slides:
     • Users involved in the study: what was the selection criterion for the target audience?
     • User 1 doesn’t fit the criterion of a valid target user group (already part of the crowd research project)
     • User 2 has never worked with any crowd platform before
     • User 3 is a male freelancer (why pick a worker to act as a requester?)
     • Users were told "Benchmark - you should be able to do it in less than 6 minutes" and then timed for efficiency (perfectly integral times??)
     • User data is not consistent - some data, such as gender, is only available for 1 user - which shows no standard data capture process/survey was followed
     • All the slides try to explain what was done, but I think a UX study is meant to answer concretely why something should be done
     • The expert took longer - what is the reason for that?
     • Where are the videos or interviews conducted with all the users? I can only see one set of videos. Please share the others if possible.
     • What was the final evaluation, or the statistically significant recommendations from the responses (if they exist)?

  2. Link 1: Personal feedback on different partial designs; it cannot be called an A/B study without a clear user research plan and executed steps. There is also no statistical measure of how feedback was captured.

  3. Link 2: Shows meeting notes for the hangout rather than a UX study/research.

  4. Link 3: Appears to be personal feedback on the interface.

  • Are there any other concrete examples that show the actual research process followed and which survey or UX technique was executed? Also, user responses are missing from almost all of these examples, which is the most critical component, and no statistical data exists whatsoever to reach a conclusion about what works and what doesn’t.

@iceLearn
Member Author

iceLearn commented Jan 10, 2018

Thank you to all who voted for me, @markwhiting @mbernst. I highly appreciate your confidence and your willingness to give me the opportunity to work in a field I am so passionate about and in which I can contribute most strongly to this project.

Two years' worth of continued interest and effort were made worthless by 6 "thumbs downs", which suggest that I am not suitable, that I have not done enough work for Daemo, that I have not done enough usability, and that I have not answered the "fundamental questions" raised by Neil:
What heuristics were violated? How and Why?
What factors contributed to the severity of a problem?
Who were the evaluators? Did they communicate with each other?
How many times they evaluated the interface; is there a full technical report of HE?
What was the mean of a set of severity ratings from the evaluators?
What recommendations came out of the UX exercises?

UX, or user experience, is commonly understood as what users feel about your product or service, and how.
Although the questions raised mainly focused on usability, that is only one part of UX; it also includes ethnographic research, interviews, observations, experience design mockups, information displays, and much more. It is a field that is not well defined but is evolving every day.

So the real question is: am I not good enough to represent the UX team?

Based on the voting, according to the constitution: yes, it disqualifies me. Within the last 30-40 minutes of the voting timestamp, @neilthemathguy @Alipta @AKSHANSHA47 @anasarhussain @sehgalvihore @shirishgoyal decided to thumbs-down my vote, which signals that they really do not want me to be selected.

I can understand when someone does not vote, which signals "I am OK with the person being nominated, although I do not feel strongly about the selection" (which I exercised in the Design quality review nominations), but a downvote signals a strong opinion.

At least @neilthemathguy explained his rationale for what disqualifies me, based on what he thinks UX is and referring to what he learned at Carnegie Mellon University and MIT and from the industry UXers he knows. I respect your thought, but I really want to express my feelings about this.
I strongly believe, and I know, that you have confidence in me and my capability; in fact, we have worked together for nearly 2 years on this project, with mutual understanding and respect for the collective effort. (I am not trying to compare myself to you or your effort, and I can assure you I hold the opinion that you have worked and contributed more than I have.) Since this is about me, I want to say that there were occasions when you confidently expressed that I can do the UX part, and we worked towards it. We both know that.

Unfortunately, time has brought many misunderstandings and power struggles over ownership, of who's and what's, of which I am a victim rather than a cause.

I am stunned to see @shirishgoyal's comment - my effort has been simply checked against, link by link, slide by slide, such as:
User 1 doesn’t fit criterion of valid target user group (already part of crowd research project)
User 2 has never worked with any crowd-platform before
User 3 male freelancer (why pick worker role to act like requester?)

I could take them one by one and debate the rationale. User 1, although a crowd researcher, had never experienced our designs before. For user 2, there is no rule in UX that we cannot check usability with someone who has no experience with the target application; in fact, it is great to get the perspective of users who have never used a crowd platform, because we target new users as well.

Come on! This is not the point. If your rationale is based on that, I feel this project is nothing genuine but an utter power struggle.

I am more stunned to see the sudden thumbs-downs from @Alipta, @AKSHANSHA47, @anasarhussain, and @sehgalvihore, who gave no reason and have never interacted since the day we established the governance process.

I really do not have anything personal against you, @Alipta, @AKSHANSHA47, @anasarhussain, @sehgalvihore, but the struggle I see raises a question of being genuine and ethical. Any concerns and questions should have been raised (although that is not required according to governance); there are etiquettes.

In my personal view, consciously or unconsciously, your attempt does not look genuine.

But why... why does it have to be like this? If I had asked this 1 year ago, I am sure everyone would have voted differently, but at some point, I don't know exactly where, the project started looking like an attacking pitchball station. Unfortunately, the outcome of this proposal has been a victim of that, as have a few others.

I am truly sad about the situation, and I have no idea how to make it better. I can recall the days when we eagerly took part in every effort to make this a great success.
I may not be from CMU, MIT, or top-notch industry, nor may my country or university be known to the world, but throughout this project I have continuously shown great interest, contributing as much as I can while raising 2 small kids, doing a PhD, earning scholarships from Google, reviewing many CHI papers, organizing and reviewing HCI for Grace Hopper (GHC), and interacting with many other great researchers around the world, including all of you.

But 2 years of building this came down to the fate of 6 downvotes.

My feelings are irrelevant to the governance, and I do respect the results, but will it be the same for the whole project? Where is our community spirit? Teamwork, social bonds? Are we doing the right thing?

I know any of you can take what I said word by word and make your own interpretations, but being the only woman on this project for the past few months, I feel my estrogen levels are disturbed, making me more emotional than on any other day.

Anyway, I look forward to continuing to learn with you, and, if possible, to getting back to the way we felt when we started the project.

@shirishgoyal

Apologies, @iceLearn, if my feedback has affected you negatively in any way; that wasn't my intention at all. Also, it is not about whether you can participate in the UX team, but about who can review other people's work. I am nobody to decide on your participation, as that should be your personal decision, and I have always argued for giving people the freedom to do what they are best at. Therefore it was important for me to illustrate where the gaps existed, of course based on my exposure to the different UX/UI professionals I have worked with.

The point of mentioning the different users was to highlight that every UX study is meant to start with a definition of the target audience and objective, and it felt to me that this one covered everybody. I find it odd that you picked this out of context and completely ignored the other feedback. I would have given the same feedback on these links 1 year ago as well.

I cannot do an injustice to my role as a voter by not acting on what I really believe. I also apologize on behalf of everyone else who may have read my feedback and acted accordingly, trusting my word or experience.

Despite all the experience we may gather, we are just humans; we will keep learning and growing, and it would be a fallacy to think we have learnt it all. I personally believe constructive critique highlights the gaps and what should be fixed, and it is all intended to improve the receiver. The same happens when we submit a research paper: after all the hard work, some reviewers may not find it compelling or worthy of inclusion yet. That only means we have more work to do, through self-reflection, learning, and looking critically at what we submitted to fill the gaps.

I have full faith that you will bounce back and contribute to the community with the same zeal. I will be more than happy to nominate or upvote you for UX in the future when there is visible order and quality in your submissions.

@iceLearn
Member Author

iceLearn commented Jan 12, 2018

Thank you @shirishgoyal; as much as you do, I want to believe your feedback had positive intentions.
But this whole exercise does not appear to be in good faith.

First, @neilthemathguy questioned my ability and the proof of what I have done for Daemo:

“First, there are not enough results/activities to evaluate the UX work in the context of Daemo. We neither had enough users to conduct UX nor had sequence of studies beyond the ones listed above, in Slack, and during the weekly milestones. At present, I hardly see any recommendations that were made through UX and finally influenced or became the parts of Daemo’s UIs.”

And all of the questions were based on Heuristic Evaluation (HE), focusing on usability.

Since the day I joined this project, we have been ideating, prototyping, and testing the whole process. User experience is not just usability testing. There were milestones where I DRIed the user research (Milestone 4), and we created UX test plans together while I DRIed the effort -
http://crowdresearch.stanford.edu/w/index.php?title=UX_Plan_for_Deamo#2_._User_Testing_for_Daemo,
Before the governance process, Rajan Vaish (@rvaish), who was mainly coordinating the research group, selected me to DRI it based on my performance in the project and also my background (I can provide Slack messages and invitation emails, but do I have to?).
At the same time, I collectively DRIed many efforts, including user studies added to our research papers.

As I said, the initial task authoring prototype designs were based on user experience work and the basic ethnographic studies we did.
The Daemo task availability email template is a direct result of a recommendation I provided from a usability perspective, implemented by @durim.
If you search the Slack history you will find enough evidence.

But the question is: did either @neilthemathguy or @shirishgoyal think to ask the same of the Design Quality Review team in #16?
Did you ask, "What designs did you provide, and how many were adopted into Daemo? What theories and practices did you use when you designed?" They were not evaluated against any of the submissions they made to the project.
I never questioned them, because I have been in this project; I knew and trusted what they have done.

On the other hand, in the voting, I voted for the nominations that I believed had done work in that arm. I never downvoted anyone, because I have good faith that those I didn't vote for, given the opportunity, have the ability to handle design-related matters.

In my case, most of the voters who downvoted my nomination never appeared in the Design Quality Review.
I understand, as @shirishgoyal mentioned, that there are odd reviewers who do not get the work; likewise, it is understandable that some people have different evaluation methods and their own interpretations of justice.
In this case, I believe this is a deliberate attack: within such a short timestamp, so many voters appeared to downvote this attempt, and especially voters who were never active participants in any of the conversations since the governance process.

I could accept the downvotes if I had done something really problematic in the user experience process, or if any of my submissions/recommendations had caused problems.

The whole point is that, as a project, we worked in good faith; we believed in each other and helped each other thrive. One of the goals of this project was to elevate and give opportunities to students and researchers and to build a community that supports each other while working towards a common objective.

Until this vote I felt the same, but all of a sudden, what I believed, what I have built with you, and the efforts I have put into this project, working in good faith and with such respect, became a humiliation because of 6 downvotes.

What are we signalling to anyone who is about to join this project? That even if they work hard for more than 2 years with continuous effort, it will ultimately be questioned in this manner, and they will have to collect proof of their capabilities. But if you can find some members who show up only in time for voting, you are better off in this community.

Where is the trust? Where is the good faith? I never thought this would happen to me. I always had good faith in all of you, and I still want to believe the same, but this whole thing happened, and I was treated and evaluated differently from the rest of the community, in a bad way.

Ultimately, 6 downvotes decided the quality of my work.

Believe it or not, I am in shock and agony... and I keep thinking: in all these 2+ years, what have I done wrong to deserve this?

@neilthemathguy

neilthemathguy commented Jan 13, 2018

Let’s focus on the fundamentals of the governance process.

The community voted for the following options:

  1. Sierra: creation of a central leadership
  2. Aspen: creation of a DRI council
  3. Cascade: no centralized decision-making body as in options 1 and 2

Out of these options, the community selected Cascade, i.e. community-oriented decision making with no centralized control or leadership of any sort. We collectively agreed to follow that.

I’m not sure why these groups are considered some sort of RECOGNITION or POWER. Let me be very clear that they are NOT. Being part of these groups doesn’t indicate any recognition or confer any power. I strongly believe in leadership as an action, NOT as a seat of power or a designation. The quality review groups are not congruent to any sort of leadership. This is a responsibility to make sure that we as a community do high-class work and keep the promises we have been making to the rest of the world. We should adhere to high standards and inspire the rest of the world by staying committed to our original purpose and actions, rather than cheap talk and self-interest. Every now and then, a few people put their self-interest above the community’s goals and welfare. What do the members want: power for a few, or money, or the success of the community?

Let’s take a pause and think together: where are these review teams really needed? I strongly believe that any mission-critical activity, such as pushing code and deployments to the production system, should go through multiple reviews by community members and a selected few who can take immediate steps to restore a broken system. At the same time, these selected few should NOT think that they have some exclusive power. They do NOT; they are at the same level as the rest of the community, and they cannot hijack the codebase, delete people’s contributions/work, or prevent other community members from contributing, as has frequently happened in the past.

We have established the governance and proposal process; its goal is to give every member respect and an equal voice. A member of the community has a fundamental right to be a reviewer: anyone can review or comment on artifacts, code, or proposals and suggest how to improve their quality. Different perspectives or critiques are NOT threats. These critiques help all of us improve and push ourselves towards our goals. Instead of trashing the feedback, I would greatly encourage people to appreciate others’ perspectives and critiques. That’s what the open community, design, research, and collective process is all about. If the community doesn’t bring different perspectives together, how will it grow?

@iceLearn I’m aware of your contributions to Daemo. I, on behalf of the community, have always valued and appreciated these efforts; this has been reflected in the various recognition criteria we have used in the past. Everyone’s efforts must be recognized, and I’ve been working tirelessly to make that happen. I think many of your comments regarding the contributions have already been captured under the #21 proposal. As stated, my stance above is strictly based on the artifacts shared in the nomination. The quality review process has nothing to do with status or recognition.

I strongly believe that calling out members for their voting opinions limits and censors them from freely expressing their views and hurts democratic governance procedures. I and many other volunteers have worked very hard to make this governance process happen and to give an equal voice to every community member, past and present. The recent events have also affected my life and other community members’ lives. As this is a volunteer project, members have the freedom to vote at any time during the voting period; nobody can take this right away for their own self-interest. In this situation, 10 votes were cast by the community: four of them in favor and six against. It is unfair when questions are raised only about the opinions that were not in favor.

I hope that, as part of the community, everyone will start respecting the democratic process, different points of view, the voting process, and its results. Otherwise, we are just heading toward wasting the entire community's efforts of the last 3 years.

@iceLearn
Member Author

@neilthemathguy, focusing on the fundamentals of the governance process: yes, the majority of the community agreed on Cascade, i.e., community-oriented decision making with no centralized control or leadership of any sort. Out of ~1000 members, 2 voted for Sierra, 7 for Aspen, and 12 for Cascade, so by majority rule Cascade is our process. I totally agree; there is no disagreement about that at all.

Let me phrase my understanding of the quality review groups: these groups in the Crowd Research Collective help make sure that the output of submitted proposals is of high quality relative to what was proposed, and help improve it so that we produce a better result than the proposal alone would, which ultimately reflects high-quality work and meets our promises.

Toward this goal, I proposed nominations, including my own, seeking the opportunity to be in that user experience quality review group so I could help the community with the expertise I have and am confident in contributing.

I never meant, nor is it my understanding, that the quality review groups represent POWER or RECOGNITION. It was never my intention to see this as a leadership position, and I am glad that we are aligned: I also believe in leadership as an action, NOT as a seat of power or designation, and this quality review group is fully responsible for reflecting quality output from Daemo.

Help me understand: who and what are you referring to as cheap talk and self-incentives? Is it something that I said while explaining my feelings in this thread? Sorry, I don't understand what you mean by cheap talk.

I agree that every now and then some people put their self-incentives above the community’s goals and welfare; that is understandable in any project, including this one. But I am not referring to anyone else; I am referring to myself, because I have been in this community continuously since the day I joined, and I will continue to be in the future.

Still, it is a good question: what do the members want?

By members I mean the ~1000 members who signed up to Crowd Research on Slack during the spring, summer, and winter of 2015. I do not know about the others, but if you ask the question directly of me: I joined this community to learn, to interact with researchers worldwide, and to work toward building a crowdsourcing marketplace.

And as much as you believe it, I believe the review teams have responsibilities, and theirs is not a leadership position. They are the same as the other crowd researchers in this community, just with extra responsibilities. So I believe the selected few, including you, will think from that perspective. The activities that happened in the past, such as hijacking code, are not things I did, nor is it my intention to encourage any of them.

I totally agree that every crowd researcher can review and has the right to review, and that different perspectives are not threats at all. Feedback should not be trashed; rather, it is part of the learning process.

Let me again bring up the 2 points that concern me as a volunteer who has contributed to this project throughout 2+ years, and especially as someone who has spent extra time on meetings, discussions, comments, and work toward the governance process and the good of the project. (Of course, I am not the only one.)

Point 1 -
I proposed my name, along with one other person whom I believed could do a good job on user experience for Daemo, based on my confidence and expertise. And I repeat, this was never intended as a leadership position, but as a willingness to serve and to spend time on those extra responsibilities as a volunteer (meaning putting in my effort without expecting anything in return).

How was I confident enough to nominate myself? Because in the past I have done a lot of user experience work, including for this project; I DRIed user research activities, and my ongoing PhD research is in the area of Design-Based Research (DBR), HCI, and open learning.
Because of this, I was recognized in collective research papers, and in the process we had before governance, PIs reached out and invited me to lead efforts based on my contributions to this project and demonstrated skills, including user experience.
I believe proposal #21 has nothing to do with this; it is about how, where, and what to display on the public recognition page. In this context, what I meant was that among crowd research members I was recognized in this project through author rank in papers and through DRI invitations based on my performance and contributions toward UX.

But you are saying that you evaluated me based only on the artifacts I provided in the nomination. So did @shirishgoyal.

My understanding was that you knew my capabilities and had confidence in my ability to do user experience work; that is why I reminded you that we had worked together, and that there were occasions when I DRIed voluntarily and was selected to DRI efforts before the governance process.

As a whole team, we never had agreed-upon quality standards before, but we worked by providing feedback on each other’s submissions during milestones before governance.
Had I received feedback from any of you saying that my submissions were not the expected user research or UX at any point over the course of 2+ years, I would have tried to meet those expectations.

As for @shirishgoyal: although I don’t have particular instances of us working together as I do with @neilthemathguy, we have always worked closely in the project, and I thought you would recognize my efforts toward user research and thereby understand that I have the qualities to review user experience for Daemo and to help proposals improve their outcomes when there is a mismatch.

My understanding was that the rest of the crowd researchers would evaluate me based on the efforts I have made in that direction and the capabilities I have shown over the past 2+ years.

I thought I did not have to list artifacts, instances, Slack history, and references one by one so that I could emphatically point and say, “look, these are the things I have been doing for the past 2+ years; therefore evaluate me based on that.” I had good faith that, since I have done the work, the community would consider me based on my work over the whole 2+ years. That was also my understanding of the Design Quality review evaluation, since those nominations did not carry any artifacts.

I don’t know whether either of you (@neilthemathguy or @shirishgoyal) would have evaluated me differently if I had followed the format of the Design Quality Review nomination and just stated my position statement.
I am raising this because you said my evaluation was based on the submitted artifacts.

Anyway, each of you can have your own evaluation method, as you did this time. It was my misunderstanding: I thought the community would take the 2+ years of contributions into account. It was my misunderstanding: I thought the contributions recognized in #21 met the quality expectations.

I look forward to working on proposals for a better evaluation mechanism, guidelines, etc.

Point 2 –
As I said, during the voting, within such a short time span, many thumbs-down appeared from people who had not engaged in the community since the governance process began. It appeared to be nothing but a deliberate, well-coordinated, planned attack.

But yes, I agree that the governance process recognizes any member, active or inactive, and that they can come and only vote. Yes, this is democracy. My understanding was that everyone would act in good faith in the governance process. By good faith I mean that we all engage with each other honestly and fairly. Since I felt it was not fair and honest, I called for opinions from the voters.

This recent event affected me a lot; I simply could not do anything else but think about it day and night. So I can empathize with how the past and recent events you mentioned might have hurt you, @neilthemathguy, and how such events could hurt any member in the future.

As much as you believe that calling out members’ opinions may hurt democratic governance procedures, I believe we have to listen to and understand the concerns, and that they should be addressed in the governance process, because the process we exercise has not been proven to be fair and equally distributed in fostering fairness, honesty, and good will.

So, I shall rest the thread –

On point 1 – It is not the end of the world that I was not considered to be producing quality submissions and thereby was not elected to represent a quality review group for UX. As much as I am happy to see that there are members who believed I am qualified for it, I am also happy that I can still contribute through proposals and provide feedback during the consensus process, and I look forward to it.

On Point 2 – I do not intend to discard the effort exercised in this nomination, nor to take anyone’s rights away. However, the conduct and voting in this nomination (#43), compared to the Design Quality review nomination (#16), raise concerns.
We shall work together on fixing these concerns, in good will toward the project, in upcoming proposals.

To me, the Crowd Research Collective is like a family: we fight, we raise concerns, we question conduct, and we argue, but we are all one family who cares for each other while celebrating each other’s success. I am sure those who have worked here for more than 2 years feel the same.

@mbernst

mbernst commented Jan 16, 2018

While I abide by the decision, I am dismayed by the results of this vote. We have spent weeks and months emphasizing that a core value of the Collective is respect and recognition of our members. These quality review groups represent recognition, responsibility, and growth that new members can aspire to. And yet after all this emphasis on recognition and respect, two thirds of the community votes to deny recognition to those who have spent years contributing quality work. This denial casts a poisonous miasma over the claim that we respect and recognize contributors. I hope the community will develop new strategies to become more developmental and supportive.

@qwertyone
Contributor

qwertyone commented Jan 16, 2018 via email

@neilthemathguy

neilthemathguy commented Jan 17, 2018

I highly encourage people to keep emotions aside and introspect.

How can Daemo provide voice to the workers and requesters when it cannot even understand different perspectives of its own community members?

Quality Review Groups

The quality review groups do not represent any sort of recognition or leadership. We have not agreed on this notion of recognition as a community.

Governance Process

We have collectively agreed to follow Cascade, i.e., community-oriented decision making with no centralized control or leadership of any sort. Why are we creating structures of hierarchy? Also, people keep questioning or bullying the vote when it doesn’t go in their favor. And we think we have built a democratic system?

Recognition

Recognition should follow a clear, visible metric defined at that particular time. This has been taken care of in #21 and will also cater to new cohorts in a similar manner. We should follow common and agreed standards of recognition for everyone. These standards will evolve.

Respectful Environment

The community has seen how a handful of people have abused power in the past and bullied others in the hangouts. I’m trying to understand how this will lead to a respectful environment. I never saw it corrected.

Bias to Action

In the Code Review groups, it has been almost a month now, and volunteers still don’t have access and privileges. How is this bias to action? Isn’t this denial casting a poisonous miasma?

Selection Bias

I wonder why people didn’t nominate the many others who deserve to be in these groups. Didn’t other people contribute? Don’t they have the required credentials? I’m disappointed to see this biased behavior that denies opportunities to deserving people who could make Daemo awesome.

Value of different perspectives and diverse opinions

In a democratic community, a member's participation in the voting process is a significant act; it is the sign of responsible members who care for the community. Just as higher turnout makes a democracy more representative, it is essential for the sustainability of this community to hear different voices and perspectives.

As Martin Luther King Jr. said “The ultimate measure of a man is not where he stands in moments of comfort and convenience, but where he stands at times of challenge and controversy.” I hope we can draw some inspiration here and help rebuild this community.

Daemo Volunteers

The volunteering members who contribute to Daemo have the right to engage in the community in whatever capacity they choose. No one can or should censor or threaten them. Although ~1000 people signed up for the project, in aggregate roughly 80-90 were listed as co-authors on the papers published so far. For the voting, we collectively discussed and agreed that it is fair to give every co-author voting rights. If you recall, an email was sent out regarding this.

Attrition

We have come a long way since the WIRED fiasco. I was hopeful that people would understand each other’s perspectives and come together to create a positive environment for everyone, not just for a few. In that hope, we spent 3-4 months collectively establishing the governance process. However, the disrespect toward the volunteers and the democratic process shows why many people are active and follow threads, but don’t participate in the project.

What’s next?

Peace.
