I think test automation is effective if:

  1. It is properly distributed between levels (unit, integration, UI), so
  2. It takes a reasonable amount of time to run, to
  3. Provide reasonably accurate quality feedback about the product

Properly distributed between levels (unit, integration, UI)

Pretty much everyone knows about the test pyramid. The nice thing that comes as part and parcel is that when you add tests at all levels, you also, almost unintentionally, end up with better-designed, more maintainable software. And it is also probably the only scalable way to ensure that the test run….
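The pyramid shape can be stated concretely: far more unit tests than integration tests, and far more integration tests than UI tests. A minimal sketch in Python (the counts and the `is_pyramid` helper are my own illustration, not numbers from any real suite):

```python
# Hypothetical counts for a suite; only the shape matters, not the numbers.
suite = {"unit": 700, "integration": 250, "ui": 50}

def is_pyramid(counts):
    """True if each level has strictly more tests than the level above it."""
    return counts["unit"] > counts["integration"] > counts["ui"]

assert is_pyramid(suite)
```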

Takes a reasonable amount of time

Let's say less than 30 minutes, so it is possible to do a test run after each commit or merge to master, which means tests run often. Being able to run as often as possible is crucial so problems don't accumulate and at any moment we have…
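One way to keep that promise honest is to treat the 30 minutes as an explicit budget and fail the pipeline when even a green run overruns it. A sketch under that assumption; `run_tests` is a stand-in callable for whatever actually executes your suite:

```python
import time

TIME_BUDGET_SECONDS = 30 * 60  # the "less than 30 minutes" target

def green_and_within_budget(run_tests, budget=TIME_BUDGET_SECONDS):
    """Run the suite and require both a passing result and a run time
    inside the budget, so it stays cheap to trigger on every commit."""
    start = time.monotonic()
    passed = run_tests()  # expected to return True on a fully green run
    elapsed = time.monotonic() - start
    return passed and elapsed <= budget
```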

Reasonably accurate quality feedback

Meaning that failed tests indicate a product problem, and if no tests fail, the product may still have bugs in some peculiar scenarios, but it is free of catastrophe-level bugs and can be deployed on the spot, at least to the pre-prod environment.
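That asymmetry (failures are trustworthy, a green run is only "probably fine") can be encoded directly in a deployment gate. A sketch; the function name and return messages are my own wording, not an existing API:

```python
def interpret_run(failed_tests):
    """Map a test run onto the feedback described above: any failure is a
    product problem; a green run clears the product for pre-prod without
    claiming it is entirely bug-free."""
    if failed_tests:
        return "blocked: failing tests indicate a product problem"
    return "deployable to pre-prod: no catastrophe-level bugs detected"
```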

This definition is probably biased by the environments I have worked and am working in, and it is likely to change and evolve over time. I would be glad if somebody could offer better definitions.