
Notes on TESTING-SUCCESS-FACTORS.md #35

Open
tomRedox opened this issue Apr 9, 2016 · 1 comment

Comments


tomRedox commented Apr 9, 2016

Hi,

I'm really grateful to you for the excellent testing factors article. I thought some feedback might be useful to you.

To give you an idea of my experience and competencies: I'm fairly new to JS, but I've been coding for 10 years and I'm reasonably experienced in writing unit tests (but not end-to-end tests) in C#. I practice TDD for all of my business logic code and most of my other code. As well as coding I am responsible for designing the systems and requirements gathering from our clients and I do that on a very regular basis.

I'm currently working on the MVP for our first SaaS product which is using Meteor 1.3, React and Redux.

My goal in visiting this repo was to get continuous testing running in my IDE (Wallaby) for both my business logic code and the business-logic parts of my UI code, and then to look at how to automate end-to-end and UI testing.

My notes on the Testing Success Factors Doc:

General

Terminology

You are assuming the reader knows quite a lot of terminology. Because a lot of these terms are used so loosely, it might be worth defining what the following terms mean in the context of this article (or having a terminology page):

  • Test Script
  • Feature File

The running example

The test script example at the top of the article (the Nightwatch one) is written from the perspective of a user, whereas the example later in the article is from the perspective of Google, and the natural-language script in the second lesson then shifts back to the user's perspective. The article does state that, but when you scan the article and just compare the two versions of the tests it's easy to assume they are supposed to represent the same test, when they actually represent subtly different things - I found that confusing at first. I think it might be more helpful to use one consistent test so the reader can see the evolution more easily.

I also think the example from Google's perspective may not be the best choice: you're assuming the reader knows that Google caches things, and the caching is perhaps not really a business domain requirement anyway (the time it takes for results to be shown is, but the caching is maybe an implementation detail?).

Lesson 1

I found the test names at the end of Lesson 1 a bit confusing too. I don't really understand the first one, 'Google Index updates cached pages' - would it be clearer as something like describe('The search results shown to the user')?

The second one, 'User searches after the index', doesn't really follow the naming convention I'm used to and ignores the 'it' prefix - I wonder if it would be better written as it('includes Wikipedia - Rembrandt if Wikipedia has already been indexed').

In general, is it preferable to have test titles capture the whole spec, rather than relying on additional comments, which are likely not to get updated and so gradually become out of date or contradict the test title?
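
For what it's worth, here is a rough sketch (Mocha/Chai style, with names I have made up rather than taken from the article) of how those two suggestions might read together, with the whole expectation carried by the titles rather than by comments:

  // Hypothetical sketch only - searchIndex is a made-up stand-in, not
  // anything from the article. Read together, the titles are the spec:
  // "The search results shown to the user includes Wikipedia - Rembrandt
  // if Wikipedia has already been indexed".
  const { expect } = require('chai');
  const searchIndex = require('./search-index'); // hypothetical module under test

  describe('The search results shown to the user', function () {
    it('includes Wikipedia - Rembrandt if Wikipedia has already been indexed', function () {
      searchIndex.add('Wikipedia - Rembrandt');             // arrange: the page is already indexed
      const results = searchIndex.search('Rembrandt');      // act: the user searches
      expect(results).to.include('Wikipedia - Rembrandt');  // assert: no extra comment needed
    });
  });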

Lesson 2

Would benefit from a paragraph at the start explaining what Cucumber is and what a feature file is.
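
Even a tiny made-up feature file shown up front would do most of that explaining. Something along these lines, perhaps (Gherkin syntax; the scenario is my own invention, not taken from the article):

  # search.feature - a hypothetical example, not from the article
  Feature: Search results reflect recently indexed pages

    Scenario: A newly indexed page appears in the results
      Given Wikipedia's Rembrandt page has been indexed
      When the user searches for "Rembrandt"
      Then the results include "Wikipedia - Rembrandt"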

The Wikipedia DDD page might be more of a hindrance than a help to people who are new to this; it makes DDD look fairly complex. I wonder if there is a good beginners' guide that could be linked instead?

If you are interested in step reuse and the readability of your testing codebase, you can achieve that through proper software engineer principles at the automation layer, thus creating a clear delineation between the natural domain language, and the test automation code that verifies the domain language is being fulfilled by the application

A good example of the above (or a link to one) would probably really help at that point. I think I know what it means, but it took a good few read-throughs.
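
For instance, the step definitions behind the little feature file sketched above might look something like this (a rough sketch in the cucumber-js style of the time; the in-memory index is a made-up stand-in for whatever the real automation layer would do):

  // steps.js - hypothetical cucumber-js (1.x-era) step definitions.
  // The natural domain language stays in the .feature file; the automation
  // detail lives here, and any scenario that repeats a step reuses it.
  const assert = require('assert');

  module.exports = function () {
    this.Given(/^Wikipedia's Rembrandt page has been indexed$/, function (callback) {
      this.index = { Rembrandt: ['Wikipedia - Rembrandt'] }; // made-up in-memory stand-in
      callback();
    });

    this.When(/^the user searches for "([^"]*)"$/, function (term, callback) {
      this.results = this.index[term] || [];
      callback();
    });

    this.Then(/^the results include "([^"]*)"$/, function (expected, callback) {
      assert(this.results.indexOf(expected) !== -1);
      callback();
    });
  };

The feature file can then be read (and reviewed) by someone who never looks at this file at all.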

Lesson 3

The pyramid: it might be worth saying that the pyramid effectively works from the top down, i.e. Service layer tests exercise code that has also been tested in the Unit test layer, and UI tests (in the context of the diagram) exercise code that has been tested by both the Service and Unit layers.

They refactor the AccountHolder and associated unit test to store the balance in a nested field checkingAccount.balance instead of balance. They run all the unit tests and they pass

If this article is also aimed at non-JavaScript developers, it may be worth mentioning that this scenario is less common in a strongly typed environment (changing the interface of AccountHolder would have broken other unit tests if JS were strongly typed).
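
To make that concrete for readers coming from a typed language, a hypothetical sketch (the names are mine, not the article's) might help:

  // After the refactor, AccountHolder and its unit test were updated
  // together, so the unit tests still pass:
  class AccountHolder {
    constructor() {
      this.checkingAccount = { balance: 0 }; // was: this.balance = 0;
    }
    deposit(amount) {
      this.checkingAccount.balance += amount;
    }
  }

  // But a consumer elsewhere still reads the old field. In JavaScript this
  // only shows up at runtime (holder.balance is undefined); a compiler for
  // a typed language would have flagged it at build time.
  function formatStatement(holder) {
    return 'Balance: ' + holder.balance; // silently becomes "Balance: undefined"
  }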

An example of what a domain integrity test actually looks like would probably be helpful in this lesson.
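
I am only guessing at what the article means by domain integrity, but assuming it means a test that exercises the real domain modules together (below the UI, with no mocks between them), a sketch might look like the following - it would catch the checkingAccount.balance refactor above even though every unit test still passes:

  const { expect } = require('chai');
  // Hypothetical module containing the AccountHolder and formatStatement
  // from the sketch above:
  const { AccountHolder, formatStatement } = require('./accounts');

  describe('Account statements (domain integration)', function () {
    it('reports the balance after a deposit', function () {
      const holder = new AccountHolder(); // real domain objects, no mocks
      holder.deposit(100);
      expect(formatStatement(holder)).to.equal('Balance: 100');
    });
  });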

Lesson 4

Looks good :)


khalidadil commented Apr 19, 2016

I noticed a minor typo in the last paragraph of the TESTING-SUCCESS-FACTORS.md file. In:
If you test your domain through the UI only, then changes to your domain will be costly and over time, this sill

the last word was presumably intended to be 'will'.

ghost pushed a commit that referenced this issue Apr 19, 2016