We have automated tests (good). Automated tests usually don't cover *everything*, so when there's a new version to test (say, on the dev server), I'd like to be able to prioritize the things our automated tests don't cover.
Maintaining a full catalog of test cases as seen through the user-function lens is impractical, but could we keep a "punch list" of user-visible areas or actions -- things we know are not covered, or not covered well -- that human testers should be sure to poke at? (A rough sketch of what that might look like is below.)
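For concreteness, here's one way such a punch list could look, as a small file checked in next to the tests so it's easy to update when coverage changes. The file name and every entry here are placeholders, not a claim about what is or isn't actually covered today:

```
# MANUAL_TEST_PUNCHLIST.md  (hypothetical example)

## Not covered by automated tests -- always check by hand
- [ ] Password reset email flow, end to end in a real mail client
- [ ] File upload over a slow or flaky connection
- [ ] Print / PDF export layout

## Covered, but weakly -- worth a quick poke
- [ ] Search result ordering (only smoke-tested)
- [ ] Permissions UI for non-admin roles
```

Each entry would ideally note *why* it isn't automated, so we know whether to keep it on the list or eventually write a test for it.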
If I'm kicking the tires of something on the dev server to try to answer the question "is this ready to go?", what should I focus on?