Predictive state representations (PSRs) have recently gained attention in the robotics community because, in theory, they promise a powerful model that can be learned directly from data. In practice, however, applying PSRs remains difficult: only a few learning algorithms have been proposed so far, and only a small number of successful attempts to learn PSRs of complex domains have been reported. This practical guide aims to ease and encourage hands-on work with PSRs. On the one hand, we provide the theoretical background and practical instructions needed to work with PSRs; on the other, we identify open questions whose investigation would improve their practical applicability. To this end, we have re-implemented an algorithm that learns a PSR of a simulated mobile-robot environment. We lead the reader from the theory required to understand the implemented algorithm to its practical use, providing in-depth information on all parts of our implementation. In a series of experiments we not only validate earlier results showing that the learned PSRs are accurate enough to enable successful reinforcement learning, but also investigate the quality of the learned models and the empirical performance of the algorithm itself. For this purpose we apply the learning algorithm to environments of varying complexity and examine the practical limits of the implemented approach. One of the main challenges we faced was parameter tuning: we found that tedious, environment-specific fine-tuning is needed to reliably learn accurate representations, and we therefore investigate the influence of the parameters on the quality of the learned representations in several detailed experiments. The results provide guidance for future work and reveal problems that must be tackled to improve PSR learning and make it applicable to complex real-world domains.