Post History

#1: Initial revision by Monica Cellio · 2021-01-13T15:51:21Z (11 months ago)
What user-facing things aren't covered well by automated tests (and should be tested manually)?
We have automated tests (good). Automated tests usually don't cover *everything*, though, so when there's a new version we want to test (say, on the dev server), I'd like to be able to prioritize the things that aren't covered by our automated tests.

Maintaining a full catalog of test cases as seen through the user-function lens is impractical, but could we have a "punch list" of user-visible areas or actions -- things we know are not covered, or not covered well, that human testers should be sure to poke at?

If I'm kicking the tires of something on the dev server to try to answer the question "is this ready to go?", what should I focus on?