Preprocessing by means of data reduction rules is a common approach to tackling combinatorially hard and compute-intensive problems. So far, however, the data reduction techniques in use have typically been ad hoc rather than systematically developed, and no general theory has been built for this universally applicable method. The concept of problem kernels, which originates in parameterized complexity theory, makes it possible for the first time to investigate the effectiveness of data reduction techniques in a systematic, theoretically grounded way. Moreover, as initial case studies show, such theoretical investigations can very effectively guide the search for new and better data reductions. Through algorithm engineering, practical tools can be developed and the actual power of data reduction rules can be examined across a wide range of applications. Ultimately, this opens up new application potential for this fundamental algorithmic technique.
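
To make the notions of a data reduction rule and a problem kernel concrete, the following sketch (not part of the original text) implements the classic reduction rules attributed to Buss for Vertex Cover parameterized by the cover size k: isolated vertices are removed, and any vertex of degree greater than k must belong to every cover of size at most k. After exhaustive application, a yes-instance retains at most k^2 edges, i.e., a problem kernel whose size depends only on the parameter k. The graph representation and function name are illustrative choices, not taken from the text above.

```python
def buss_kernel(adj, k):
    """Reduce a Vertex Cover instance (adj, k) to a problem kernel.

    adj: dict mapping each vertex to the set of its neighbours (undirected graph).
    k:   maximum allowed cover size.

    Returns (reduced_adj, remaining_k, forced_vertices), where forced_vertices
    must be in every vertex cover of size <= k, or None if the instance is a
    provable "no"-instance.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    forced = set()

    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if not adj[v]:
                # Rule 1: an isolated vertex covers no edge and can be removed.
                del adj[v]
                changed = True
            elif len(adj[v]) > k:
                # Rule 2: a vertex of degree > k must be in any cover of size <= k,
                # since otherwise all of its > k neighbours would be needed.
                forced.add(v)
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                k -= 1
                changed = True
                if k < 0:
                    return None  # budget exhausted: no cover of the requested size

    # Kernel size bound: every remaining vertex has degree <= k, so a cover of
    # k vertices can cover at most k * k edges; more edges means a "no"-instance.
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    if edges > k * k:
        return None
    return adj, k, forced


# Example: a star with centre 0 and five leaves, k = 2.
# Rule 2 forces vertex 0 into the cover; the leaves then become isolated.
graph = {0: {1, 2, 3, 4, 5}, 1: {0}, 2: {0}, 3: {0}, 4: {0}, 5: {0}}
print(buss_kernel(graph, 2))  # -> ({}, 1, {0})
```

The point of the sketch is that the reduced instance, together with the bound of k^2 edges, is exactly what parameterized complexity calls a problem kernel: its size is bounded by a function of the parameter alone, which is the yardstick against which the effectiveness of data reduction rules can be measured.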