This article illustrates a respondent-friendly approach to preference elicitation over large choice sets, one that overcomes limitations of rating, full-list ranking, conjoint, and choice-based approaches. The approach, HLm, requires respondents to identify only the top m and bottom m items from an overall list. Across respondents, the number of times an item appears in participants' L (low) lists is subtracted from the number of times it appears in their H (high) lists. These net scores are then used to order the full list. We illustrate the approach in three experiments, demonstrating that it compares favourably to familiar methods while being much less demanding on survey participants. Experiment 1 had participants alphabetise words, suggesting the HLm method is easier than full ranking but less accurate when m does not increase with list length. Experiment 2 asked participants to order US states by population. In this domain, where knowledge was imperfect, HLm outperformed full ranking. Experiment 3 elicited respondents' personal tastes for fruit; the final HLm ranking correlated highly with MaxDiff scaling. We argue that HLm is a viable method for obtaining an aggregate preference order across large numbers of alternatives.
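The net-score aggregation described above can be sketched as follows. This is a minimal illustration of the scoring rule only (count H appearances, subtract L appearances, sort); the function and variable names are illustrative and not taken from the article.

```python
from collections import Counter

def hlm_net_scores(responses):
    """Aggregate HLm responses into a net-score ordering.

    `responses` is a list of (high_items, low_items) pairs, one per
    respondent, where each list holds the m items that respondent
    placed at the top (H) or bottom (L) of the choice set.
    """
    high = Counter()
    low = Counter()
    for h_items, l_items in responses:
        high.update(h_items)
        low.update(l_items)
    items = set(high) | set(low)
    # Net score = number of H appearances minus number of L appearances.
    scores = {item: high[item] - low[item] for item in items}
    # Order items by descending net score.
    return sorted(items, key=lambda item: -scores[item])

# Hypothetical example: three respondents, m = 2, five items A..E.
responses = [
    (["A", "B"], ["D", "E"]),
    (["A", "C"], ["E", "D"]),
    (["B", "A"], ["E", "C"]),
]
print(hlm_net_scores(responses))  # → ['A', 'B', 'C', 'D', 'E']
```

Note that items never named in either list receive a net score of zero by default, so with large choice sets many middle-ranked items will tie; the method targets the aggregate ordering, not fine distinctions among unmentioned items.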