Preparing rating XBlock for production -- adding tests and testability #10747
Conversation
(force-pushed from 0da782f to 4f19f60)
|
@cahrens @explorerleslie I made a PR to move the Rating XBlock to production. Dropthought -- which we were cloning -- just went out of business, and I thought it'd be nice to give courses a migration path. It has different product goals from DoneXBlock, but it's actually very similar in function, structure, and tests, so it may save time and effort to review the two together. Original review was here: The core change is the addition of tests. A major limitation is that the only way to get data out is through data dumps. That limitation is somewhat fundamental for the qualitative feedback; future iterations will likely improve this for the quantitative data. The UX could be better; we based it on how the LTI version of Dropthought looked, but now that Dropthought is no longer around, we could do better. The use of Unicode characters for the faces has upsides and downsides. I think all of these would be good to explore, but not in the MVP. |
(force-pushed from 4f19f60 to 9f3f621)
|
@pmitros thanks! I agree -- I think this would be very valuable for our course teams. However, before wider rollout, I think both data output (possibly through a data download on the reports tab, or through the XBlock itself as a staff-only link to download data) and UX improvements are required for the MVP (replace the icons, general improvements to match existing edX patterns). I'm happy to help with this effort in whatever way you see fit -- let me know what support you need. |
|
@explorerleslie If there is capacity:
|
|
@pmitros great, thanks! I know both Christina and Frances will want to see a sandbox -- could you please set one up? @cahrens can you please create a ticket for this review -- the scope is to review the tests and get it merged, but not yet fully supported for edx.org, so no docs or anything. @frrrances could you please create a UX ticket for reviewing this XBlock. The desired outcome is to give Piotr feedback on the ticket or on a PR about how to improve the existing UX; secondary is any FED feedback, which would be great to get but is lower priority. In particular, I hate the icons (could we use something from Font Awesome?), and the general UX doesn't match existing edX patterns. |
|
@cahrens While we're not bringing this to fully supported yet, you may consider giving feedback as to what would be needed to get this to production-quality. My experience has been that this adds little to initial review time (since you're reading the code either way), especially for a block this simple, but saves a ton of time in the final push to prod (where we just need to confirm the issues identified were addressed). Your call either way. |
|
@pmitros are there changes to the RateXBlock itself that we should review (https://github.com/pmitros/RateXBlock)? I created https://openedx.atlassian.net/browse/TNL-3855 for the TNL review. |
|
There are minimal changes/cleanups to the XBlock itself -- mostly formatting for PEP8/pylint, and slightly more information returned in AJAX for testability: pmitros/RateXBlock#9 Sandbox is currently provisioning as ratexblock.sandbox.edx.org (see: http://jenkins.edx.org:8080/view/Ansible/job/ansible-provision/4329/parameters/ for status). |
|
Works on the sandbox. Add 'rate' to the course's advanced settings, then add a "Provide Feedback" component from the advanced component menu. |
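Concretely, enabling the block on a sandbox course means editing Settings → Advanced Settings in Studio. A sketch of the relevant policy entry, assuming the block registers the `rate` entry point as described above:

```json
{
  "Advanced Module List": ["rate"]
}
```

In Studio the "Advanced Module List" field is edited as a bare JSON list; once saved, "Provide Feedback" appears under the advanced component menu in the unit editor.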
|
@pmitros I took a quick look at the xblock and have a few small UX suggestions:
Here is a quick screenshot of those ideas: Let me know if you have any questions or need more help! |
|
Your design is much nicer than what we have for the apparent goal of the tool, and follows better-established UX patterns, but it doesn't quite line up with the instructor goals:
My recollection is that the version used at Dartmouth (late-2015 UX) had five faces instead of four, and asked more meaningful questions (although it is difficult to tell now that Dropthought is gone). The MVP tool is relatively minimalist, with two configuration options (the text above the Likert feedback section and the text above the faces). Internally, the tool has several bits of functionality which we'd like to expose in a future version (both of these are implemented, but not exposed in Studio as user-facing features):
I'd like to expose these once the tool has proven itself useful, figure out the authoring UX, fix initial bits of feedback from the user experience, etc. I'd like the UX to generalize to that point. |
|
@frrrances Missed tagging you above. |
|
@pmitros jumping in here on the UX -- what I'm seeing from Frances's comments is that they are relatively low-effort changes that make a substantial difference in user experience. I don't agree with your assessment that they go against the instructor's goals. I think putting the suggested text-string changes in as the defaults and letting course authors customize them as they see fit would work well, because it gives instructors solid defaults while still allowing for more specific use cases. Also, even if we are asking a more specific question, I still think using stars instead of smileys, and flipping the stars and the text box, is a good idea -- to Frances's point, if someone isn't going to type feedback, at least you get their general rating. I also still just hate the current smileys. :) Happy to talk more in person if it's helpful. |
|
Talking in person may be helpful -- I've found in-person UX reviews much more productive; you can actually get at what you're trying to do and at end-user goals. I hate the current smileys as well, although they're a little better under Ubuntu than under Mac. They come from Unicode, so they are system-dependent. The current UX is a bit unpleasant, and I'm not opposed to seriously reworking it. It's mostly the way it is because we were comparing performance to Dropthought and wanted a near-identical clone, modulo the lack of styling. But I do think star ratings solve a different problem than the one we'd like to solve. We'd like to give users a clear Likert scale, and many Likert scales don't fit well into a star rating. I'll give a few other models I found online. This one kills the iconography altogether and replaces it with text: Some have a slider. I like this less, since it doesn't give a clear definition to each rating. For example, for NPS, the text "definitely recommend" and "maybe recommend" is important: (Right now, we do have the text, but only on mouseover.) And a few more: |
|
@pmitros got it, thanks for the context. If a Likert scale is really what we're going for, then we could use radio buttons. These have the built-in advantage that they're easy to make accessible, and then we don't have to worry about any cross-cultural differences with smileys or stars or whatever either. In the future, you'd also be able to give the course author the ability to customize the labels on the Likert scale if you use radio buttons. |
|
@explorerleslie The code currently has settings for:
This is not surfaced to the user, since Studio UX would take a bit of thought, and the MVP doesn't need it. From an accessibility perspective, the current UX behaves just like radio buttons. |
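As a sketch of what those unexposed settings might look like once they are surfaced, here is a plain-Python model of a configurable Likert scale. This is illustrative only -- these are not the block's actual field names, and the default labels are an assumption:

```python
# Illustrative sketch (NOT the block's actual field names): how a
# configurable Likert scale might be modeled once Studio exposes it.

DEFAULT_SCALE = {
    # Text shown above the Likert feedback section.
    "prompt": "Please rate your experience with this section.",
    # One label per scale point, worst to best (five points, as in the
    # Dartmouth version mentioned earlier in the thread).
    "likert_labels": [
        "Very poor", "Poor", "Neutral", "Good", "Excellent",
    ],
}


def label_for(rating, scale=DEFAULT_SCALE):
    """Map a zero-indexed rating to its human-readable label."""
    labels = scale["likert_labels"]
    if not 0 <= rating < len(labels):
        raise ValueError("rating %r is outside the %d-point scale"
                         % (rating, len(labels)))
    return labels[rating]
```

Radio buttons rendered from `likert_labels` would give each scale point an explicit, accessible label, in line with the radio-button suggestion above.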
(force-pushed from ae88776 to f27b976)
(force-pushed from e8c60b3 to ac04ab5)
|
@explorerleslie @cahrens I believe this may be ready for review/merge, modulo one minor Studio styling issue. This includes a lot of improvements and debugging of the XBlock test framework. In particular, we can now validate the XBlock HTML as well (but there are not yet JS-side or Studio-side tests). We also provide aggregate usage statistics to instructors. Key things to look at:
Changes to the block itself are shown in: |
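The aggregate-statistics piece can be sketched roughly as follows. This is a minimal stand-in, not the block's actual code: the real block stores votes in XBlock fields rather than a plain list, and `aggregate_ratings` is a hypothetical helper name.

```python
from collections import Counter


def aggregate_ratings(votes, num_points=5):
    """Summarize raw Likert votes (zero-indexed ints) for an instructor view.

    Returns a histogram over all scale points, the total vote count, and the
    mean rating (None when there are no valid votes yet). Out-of-range votes
    are ignored rather than raising, since stored data may predate a scale
    change.
    """
    counts = Counter(v for v in votes if 0 <= v < num_points)
    histogram = [counts.get(i, 0) for i in range(num_points)]
    total = sum(histogram)
    mean = (sum(i * n for i, n in enumerate(histogram)) / total
            if total else None)
    return {"histogram": histogram, "total": total, "mean": mean}
```

An instructor-facing view could render the histogram directly and show the mean as a headline number.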
(force-pushed from ac04ab5 to 21ffc30)
|
jenkins run all |
|
@pmitros It's not clear to me how much of pmitros/FeedbackXBlock#1 we need to review -- it is a large PR. Is there a way to see what has changed since I last reviewed the RateXBlock? Note that you have some unit test failures. |
|
Oh. Hey. I didn't notice that GitHub didn't handle the file renames very well. I'll clean that up so it shows the changes. I'll make two PRs -- one with the changes before the rename, and one with the changes after, if that's okay. I did notice the test case failure; it worked fine on localhost. I'll look into it as soon as devstack is fixed. |
|
All the changes prior to the rename: pmitros/FeedbackXBlock#2 |
|
@cahrens It might still be easier to have all of the comments in the first PR (not the split-up ones). |
|
@cahrens Would it be possible to get a quick review, even if just on the changes to the XBlock test code in this PR (without merging the big changes in the ProgressXBlock)? I'm trying to add tests to a (test-free) XBlock and running into some of the issues this PR fixes. |
|
I'm sorry, @pmitros, but I have a Friday deadline for a feature, and the TNL team is down two members. I will try to review on Friday afternoon. |
|
@cahrens Friday will be fine. I'll work on some analytics in the meantime. Thank you for both the update and the timeline. |
For my own edification, why remove the point about moving the tests into the XBlocks themselves?
There was logic, but in retrospect, not very sound. I'll add that back.
|
@pmitros I reviewed the testing part of this PR. I'm a bit confused about extract_block (how can it assume that the xblock HTML is in the first sequential, and is the only thing there?). Otherwise just some nits. |
|
I am assuming that. I think better error handling and documentation there would make sense. I'll do that. I'm building on a few hacks:
We need to fix both, since they're affecting many unrelated things, and the current code -- defining course structures in JSON -- is a stopgap until then. That code does not support general course structures, and we do assume that if we have multiple XBlocks, they are in independent learning sequences. I could put together a pile of code to select from within a page by decoding/etc., but it'd be super-ugly code, and obsolete out of the box. Once either of those issues is fixed, we're back in the running for a simple solution like the one in the comment. |
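For what it's worth, the single-block assumption can be made explicit in code. Below is a minimal stand-in, not the actual `extract_block` implementation -- the CSS class name is a guess for illustration -- that pulls the inner HTML of the first matching element out of a rendered page using only the standard library:

```python
from html.parser import HTMLParser


class _FirstBlockExtractor(HTMLParser):
    """Collect the inner HTML of the first element with a target CSS class.

    A simplified stand-in for the assumption discussed above: the rendered
    page is expected to contain exactly one XBlock of interest. Void/self-
    closing tags are handled naively, which is fine for this sketch.
    """

    def __init__(self, css_class):
        super().__init__()
        self.css_class = css_class
        self._depth = 0        # nesting depth inside the target element
        self.fragment = []     # collected inner-HTML pieces
        self.done = False      # stop after the first match closes

    def handle_starttag(self, tag, attrs):
        if self.done:
            return
        classes = dict(attrs).get("class", "").split()
        if self._depth:
            self._depth += 1
            self.fragment.append(self.get_starttag_text())
        elif self.css_class in classes:
            self._depth = 1    # entered the target element itself

    def handle_endtag(self, tag):
        if self.done or not self._depth:
            return
        self._depth -= 1
        if self._depth:
            self.fragment.append("</%s>" % tag)
        else:
            self.done = True   # target element closed; ignore the rest

    def handle_data(self, data):
        if self._depth and not self.done:
            self.fragment.append(data)


def extract_block_html(page_html, css_class="feedback_block"):
    """Return the inner HTML of the first element carrying css_class."""
    parser = _FirstBlockExtractor(css_class)
    parser.feed(page_html)
    return "".join(parser.fragment)
```

With better error handling (e.g. raising when zero or multiple matches are found), something in this spirit could make the current single-sequential assumption fail loudly instead of silently.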
(force-pushed from ee105c5 to 71fb02e)
|
@cahrens Thank you for the prompt review! I implemented all of the changes to the test case code. The PR has two commits: (1) updating the test case code, and (2) updating the FeedbackXBlock. I will make the changes to the latter, but it looks like it will take a while waiting on a11y, etc. Do you mind if I merge just the first commit (xblock_testcase.py) to master? That will allow other XBlock work to proceed. |
|
👍 to merging the test commit |
|
@pmitros can you rebase this PR? |
(force-pushed from 71fb02e to 4040298)
|
@cahrens Rebased. |
|
👍 |
|
Closing due to inactivity. |
|
@pmitros I would like to know if you're still working on the Likert scale system, please. |
|
@AmauryVanEspen Nope. Open edX is in a bit of an IP black hole, so I've moved on to other projects. Glad to advise if you're planning to work on it. |
|
This prepares the rating XBlock for production by reworking it for tests and testability. We have already gone through accessibility review. I believe that, with this PR, it will be MVP-shippable.
At some point, we may want to do further work in order to: