In recent years, maximum difference scaling (MDS) analysis has grown markedly in popularity in market research. MDS is a sound, and often superior, alternative for collecting preference measurements: it is scale-free and provides greater differentiation when measuring attribute importance than standard rating scales. With this growing popularity comes a clear need to better understand the potential and limitations of MDS. While some work has already been done (among others, Orme 2005), many areas remain unexplored, in particular the impact of the key elements of an MDS design on the accuracy of results. In this paper we investigate to what extent the number of versions, the number of tasks, and the number of respondents affect the results.