Our current testing, deployment, and release culture has attained some of the benefits of continuous delivery; however, we still struggle with:
Functional regressions
Functionality that feels unfinished when it ships
We want to identify and improve:
Gaps in our automated testing that let regressions slip through
How do we know a test exists, and when do we need to write a new one?
How do we track which edge cases are covered?
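One way to make edge-case coverage visible is to enumerate the cases as a named, data-driven table rather than scattering them across ad-hoc tests; a reviewer can then see at a glance which boundaries are exercised and which are missing. This is only a sketch: `parse_quantity` and its edge-case table are hypothetical examples, not code from our system (pytest's `@pytest.mark.parametrize` expresses the same pattern in a real test suite).

```python
def parse_quantity(text: str) -> int:
    """Parse a non-negative integer quantity, treating blank input as zero.

    Hypothetical function used only to illustrate the pattern.
    """
    text = text.strip()
    if not text:
        return 0
    value = int(text)
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value


# Each edge case gets a descriptive name, so "which edge cases are
# covered?" is answered by reading this table, not by auditing tests.
EDGE_CASES = {
    "empty string": ("", 0),
    "whitespace only": ("   ", 0),
    "zero": ("0", 0),
    "typical value": ("42", 42),
    "leading/trailing spaces": (" 7 ", 7),
}


def run_edge_cases() -> dict[str, bool]:
    """Run every named edge case and report pass/fail per case."""
    return {
        name: parse_quantity(raw) == expected
        for name, (raw, expected) in EDGE_CASES.items()
    }
```

When a new edge case is discovered (say, via a production regression), adding a row to the table both documents it and tests it, which helps answer "do we need a new test?" with a quick scan of the named cases.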