What user-facing things aren't covered well by automated tests (and should be tested manually)?
We have automated tests (good), but automated tests usually don't cover everything. When there's a new version we want to test (say, on the dev server), I'd like to be able to prioritize the things that our automated tests wouldn't catch.
Maintaining a full catalog of test cases viewed through the lens of user-facing functionality is impractical, but could we have a "punch list" of user-visible areas or actions -- things we know are not covered, or not covered well, that human testers should be sure to poke at?
If I'm kicking the tires of something on the dev server to try to answer the question "is this ready to go?", what should I focus on?
1 answer
The core features (posting, voting, commenting, editing) are thoroughly tested; we only need to test those exhaustively by hand if we're making significant changes to them. Otherwise it's reasonable to assume that if the tests pass, those features work.
It's as we get outside of core features that test coverage starts to fall off. Moderator and admin tooling in particular has very limited testing, because most users will never see it.
We have no tests for the UI. Our existing tests essentially simulate a request and assert various conditions on the response, but there's no automated check that the UI we've created actually submits an identical request. I always test any new UI I create before pushing it, but this should be a focus for manual testing, especially for major changes.
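To make that concrete, here's a rough Python sketch of the "simulate a request, assert on the response" pattern; the base URL, path, and form fields are all hypothetical stand-ins, not our actual routes:

```python
import requests

BASE_URL = "http://localhost:3000"  # hypothetical dev-server address

def test_create_post():
    # Simulate the request the "new post" form would send, then assert
    # on the response -- the same pattern our existing tests follow.
    # The path and form fields below are illustrative only.
    resp = requests.post(
        BASE_URL + "/posts",
        data={"title": "Smoke test", "body": "Does posting still work?"},
    )
    assert resp.status_code in (200, 302)  # rendered page or redirect

    # Note what this *cannot* check: that the real HTML form in the UI
    # actually submits an identical request. That gap is why the UI
    # still needs manual testing.
```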
Writing tests is probably one of the easiest parts of the application to pick up, so if you have ideas for test cases, it may well be quicker to write the test yourself and submit a PR. I wrote a post on how to create a test case.
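As an illustration of how low the barrier is, a punch-list item can often be turned into a smoke test in a few lines. Everything below (the paths, the page markers, the unauthenticated requests) is a hypothetical sketch, not our actual test suite; real moderator/admin pages would also need an authenticated session:

```python
import unittest
import requests

BASE_URL = "http://localhost:3000"  # hypothetical dev-server address

# Hypothetical punch-list entries: a user-visible path paired with text
# that should appear on the rendered page.
PUNCH_LIST = [
    ("/questions", "Questions"),
    ("/users", "Users"),
]

class PunchListSmokeTest(unittest.TestCase):
    def test_punch_list_pages_render(self):
        for path, marker in PUNCH_LIST:
            with self.subTest(path=path):
                resp = requests.get(BASE_URL + path)
                self.assertEqual(resp.status_code, 200)
                self.assertIn(marker, resp.text)

if __name__ == "__main__":
    unittest.main()
```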