
Post History

#1: Initial revision by Monica Cellio · 2021-01-13T15:51:21Z (over 3 years ago)
What user-facing things aren't covered well by automated tests (and should be tested manually)?
We have automated tests (good). Automated tests usually don't cover *everything*, though, so when there's a new version to test (say, on the dev server), I'd like to be able to prioritize the things our automated tests wouldn't catch.

Maintaining a full catalog of test cases as seen through the user-function lens is impractical, but can we have a "punch list" of user-visible areas or actions -- things that we know are not covered, or not covered well, that human testers should be sure to poke at?
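For illustration, here's a rough sketch of what I mean by a punch list -- the specific areas are just hypothetical examples, not an actual assessment of our coverage:

```markdown
# Manual test punch list (example only)
- [ ] Sign-up and sign-in flows
- [ ] Composing and editing posts (Markdown rendering, previews)
- [ ] Voting on posts
- [ ] Notifications
- [ ] Search and tag filtering
```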

If I'm kicking the tires of something on the dev server to try to answer the question "is this ready to go?", what should I focus on?