I always do a lot of homework to find out as much as I can about the product before applying for a road test and I won't apply if I don't think it will be enjoyable and useful. It would be rare for me to come across something unexpectedly bad. This process will naturally result in a lot of high scores, but I think the system is okay as is.
It is more common these days to use a 5-star system (which is effectively the same, because with half stars you get 10 levels), but stars are more graphical and it is easier to get an idea of the overall score at a glance. I just tried rating your blog with the 5-star system; some would say a consistent interface is important.
It might also make it easier to transfer the score data to a product page...
At first glance, it appears that scores from 1 through 7 mean the product is taking a hit, which leaves 8 through 10 to give it a "good" rating of sorts.
A five-star rating system seems like a good idea. The six categories that are rated 1-10 might not be applicable to every product. For instance, not every product comes with demo software. And support materials are either available or not.
I was going to make this point too. The product being tested is not always applicable in all categories. In the past I have tended to mark those sections as a 10, i.e. not penalise the score.
Sometimes documentation is provided, but of such poor quality that they might as well have left it out; bad documentation feels worse than no documentation at all.
It's great that there is a guideline, as some reviews are marked high but the comments would suggest otherwise.
Likewise we've seen low marks with nothing to substantiate why they are low.
IMO your guide is a little biased towards the negative end of the scale.
I realise we should be objective and professional, but I'd prefer to see something where 5 is neutral.
Lower than 5 means it needs work, with 1 being "I would not recommend buying it".
A 10 would suggest there are no improvements or additions that you'd like to see.
For me reading the reviews, I don't really pay a lot of attention to the score.
It's the comments that help me decide if I should purchase (or want to purchase) the product.
There were comments about what happens if the product has no demo software, and I'd like to think the ability to remove that category from the final score was available.
I'm not sure what you'd replace those two items with, or whether the final tally should be a percentage instead of a number.
In each of these reviews there is no opportunity to indicate "Would you recommend this to others?", and this can be a good indicator of its appeal.
It could be priced very cheaply, have great support, and still need work, but be a good product to buy nonetheless.
I would be in favour of dropping the rating to 5 stars.
5 much better than expected
4 better than expected
3 about what I expected
2 worse than expected
1 a lot worse
but I note that where 5 stars are used (e.g. Amazon) there are way more 4s and 5s awarded than there should be.
I view the rating as a little icing on the cake; it's the substance of the review that counts. And to whom it may concern: I almost never look at any video that is more than 2 minutes long.
Personally I would say keep it simple.
The example questions above can be better answered with yes/no or N/A rather than with a score.
What about calculating the score as the number of yes answers divided by the number of yes plus no answers?
In that case non-applicable questions do not influence the score.
Additionally, you can add an overall score for one's satisfaction with the whole product on a scale of 5.
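The proposal above can be sketched in a few lines. This is a hypothetical illustration, not an element14 formula: the function names and the choice to scale the yes/no ratio to 5 before adding the satisfaction mark (so the two together span 0 to 10) are my assumptions.

```python
# Sketch of the proposed scoring rule: each checklist question is
# answered "yes", "no", or "na"; N/A answers are excluded so they
# cannot influence the score either way.

def checklist_score(answers):
    """Return yes / (yes + no) as a value in [0, 1]; N/A is ignored."""
    yes = sum(1 for a in answers if a == "yes")
    no = sum(1 for a in answers if a == "no")
    if yes + no == 0:          # every question was N/A
        return None
    return yes / (yes + no)

def total_score(answers, satisfaction):
    """Combine the checklist ratio (scaled to 5) with an overall
    satisfaction mark out of 5, giving a total between 0 and 10."""
    ratio = checklist_score(answers)
    if ratio is None:
        return satisfaction
    return 5 * ratio + satisfaction

# Example: 4 yes, 1 no, 1 N/A, overall satisfaction 4/5
print(total_score(["yes", "yes", "no", "yes", "na", "yes"], 4))  # 8.0
```

With this scheme a product reviewed against fewer applicable questions is not penalised, since the denominator shrinks along with the numerator.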
The rating proposed by michaelkellett sounds appropriate; summing up the two gives a total score between 0 and 10.
I too am in favour of a 5-star rating system, as it is simpler to use.
However, ratings only become really meaningful as a comparator if the questions themselves are phrased correctly to reflect the common definition of the rating system.
E.g. "Product performed to expectations". That really invites a YES/NO answer; it is only if the person selects NO that you would want to know more. So if we translate this into a 5-star rating system, it would carry a different meaning from the others: 5 stars equates to "meeting expectations", and anything less than 5 stars is the degree to which the product failed to meet them.
Hence, if the questions are left as is, it may be better to provide a description of the ratings for each question, to ensure these are understood by all and to ensure better consistency in responses.
The point system proposed surely is a great improvement, as it establishes clearly defined rating criteria, making it a reference for all.
The problem I see with it is that most people are accustomed either to "academic-based" scoring systems, where 10 is excellence, 5 or 6 is sufficient, and anything less is unsatisfactory, or to "expectation-based" rating systems, where 10 is given when the product/service meets expectations and anything less than 10 denotes some negative. If you define a different rating system, people unfortunately tend to fall back to the system they are most used to, creating some ambiguity in the evaluation.
For a 5-star rating system, regardless of how the star criteria are set, the most common interpretation is as a "meeting expectations" feedback system, where 5 stars = expectations fully met (NB: not necessarily meaning "excellence"), and anything less than 5 stars usually has some negative element in it. Expectations are normally built by gathering information beforehand (i.e., in our case, reading the marketing and technical documentation).
Personally, I too would go for a 5 star rating system, something like:
5 - Expectation fully met
4 - Most important expectation met
3 - Basic expectation met
2 - Basic expectation NOT met
1 - No expectation met
0 - Not rated (impossible to assess)
Besides being visual and very simple, this kind of rating is also very common, therefore most people are already accustomed to it.
I have to disagree with you here, Fabio: one of the concepts of modern quality assurance is "surprise and delight", the feeling the customer has when the product does something good that they did not expect. The scoring system should allow for this possibility.
For example, in the 70s when I was involved with a hi-fi company, it was not usual to provide connectors with an amplifier. We packed power leads for the switched mains outlets, speaker connectors, and some phono plugs; the idea being that it didn't cost much but really scored a hit with every customer, who would otherwise have needed a trip to the shops before they could try the new toy.
I think this should have earned an extra star!
(Recently the "Surprise and Delight" concept has got a bit distorted and morphed somewhat into promotional activity - but the original concept is still valid.)
I agree, "Exceeds expectations" should be a possibility.
But I would want the middle option to be "it's OK" or "can be used but does not fill me with joy".
Perhaps I didn't explain myself well. Although I can see where you are coming from, and in principle I agree with you, my point was about what is in place now and how it is used. Unfortunately, when you look around, just about all the star rating systems use the top rating for meeting expectations, and anything less than 5 is interpreted as somehow negative. Element14 could do differently and use "exceeds expectations" for the 5 stars, but doing so would generate ambiguity in the evaluation, as a product meeting all expectations could score only 4 stars.
I suppose what I'm trying to say is that when choosing a rating system, we should take into account how people use other feedback systems out there, so that the system is more familiar and the evaluation is less open to interpretation.
I agree that being different from other systems could be a problem; perhaps we shouldn't use stars but some other symbol...
It is very tricky - in the end one has to read the text to get a real understanding of what the reviewer intends.
I too consider the top score as more of 'delights the customer' rather than 'no room for improvement', because technology always evolves.
There are cultural differences too. For example, I sometimes see 5-point scoring results for presentations, and scores are invariably higher in the US, whereas they are always slightly lower in Europe. Or there is the one mean person who scores 1/5 for everything because they attended the wrong presentation for them, while the others scored 5/5 : ) So the 5 points or stars mean something different depending on the audience. With the product reviews, some are scored extremely low, but then it becomes apparent from the text that the user didn't apply the product to the expected use cases, or had some difficulties which are unique to him/her and would not generally apply to that product, e.g. a faulty item.
I tend to agree with the comments that having an "exceeds" or "surprise and delight" option is a good idea, as that gives balance (i.e. upside, not just downside).
However, I would suggest maybe rephrasing some of the questions to make it clearer or have an intro explaining how scoring should be carried out.
So, for example, "Product performed to expectations" could be rephrased to something like "Degree to which product exceeded or failed to perform to expectations". The "Degree to which..." is now the key focus of the scoring, and since this part is common to all questions it could be used as a header, or something similar.
When we evaluated proposals we used a simple color code.
Red = Did not meet requirements.
Yellow = Met part of the requirements.
Green = Met requirements.
Blue = Exceeded requirements.
This system made it easy to summarize everyone's review and come up with a consensus rating for each item.
Only after this process was the cost evaluated, as only the green proposals were considered.
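The colour-coded process above can be sketched in code. This is a hypothetical illustration only: the original post doesn't say how individual reviewers' colours were combined, so the assumption here is that the most conservative (lowest) colour wins, which forces discussion of any disagreement, and that only GREEN-or-better proposals go forward.

```python
# Hypothetical sketch of the colour-coded consensus review process.
# Assumptions (not stated in the original post): consensus is the
# lowest colour any reviewer gave, and BLUE counts as passing.
from enum import IntEnum

class Rating(IntEnum):
    RED = 0     # did not meet requirements
    YELLOW = 1  # met part of the requirements
    GREEN = 2   # met requirements
    BLUE = 3    # exceeded requirements

def consensus(ratings):
    """Collapse the reviewers' colours for one item into a single rating."""
    return min(ratings)

def shortlist(items):
    """Only proposals rated GREEN or better proceed to cost evaluation."""
    return [name for name, ratings in items.items()
            if consensus(ratings) >= Rating.GREEN]

reviews = {
    "Proposal A": [Rating.GREEN, Rating.BLUE, Rating.GREEN],
    "Proposal B": [Rating.GREEN, Rating.YELLOW],
}
print(shortlist(reviews))  # ['Proposal A']
```

Because the colours are ordinal rather than numeric, summarizing everyone's view is just a comparison, which is what makes the scheme easy to apply in a meeting.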
For any RoadTest Review, the reviewer not only provides his or her personal take on the product being reviewed but also a numbered ranking (10-point scale) for 6 different questions.
Here is a random example:
My gut feeling is that I haven't clarified well enough what the 10-point scale means. As a result, there may be a lack of certainty when giving a grade. In addition, a lot of RoadTesters grade these questions in similar ways. So, I'd like to revisit our grading system. To start, here's my idea:
- 10 points: Outstanding
- 9 points: Very Good Satisfaction
- 8 points: Good Satisfaction
- 7 points: Adequate, but had to work through some problems
- 6 points: Needs Work
- 5 points: Barely Satisfactory
- 4 points: Below Average
- 3 points: Unsatisfactory
- 2 points: Totally Unsatisfactory
- 1 point: Time to Rethink
I'd like to get the RoadTester Group's opinions on the rating system. Do you think it needs more clarification? Keep it the way it is?
Go ahead and suggest your own. Perhaps the problem is that we need to ask different questions?
RoadTest Program Manager