I really don't think the tests should subtract points from the total score for questions answered incorrectly. It is especially frustrating when the questions that knock you back are poorly phrased or the "right" answer is simply wrong.
It should be penalization enough to simply not get points for missed questions.
For example... If I am considered an expert up until I miss something... missing that one question does not make me less of an expert. I got all the answers up to that point correct. How does missing one question lower my TOTAL knowledge of the product? It just means I don't know any "more" about it. That's all.
If you want a more accurate test of the product knowledge, how about making it so that after three missed questions in a row, the test is over and you are locked out of it for 24 hours?
EMPLOYEE0: Hi Sherry, thanks for the feedback. I appreciate that it can be frustrating when you lose points for getting questions wrong, but the negative points are pretty fundamental to the way Smarterer works.
The negative points are a reflection of the opposite case of what you're talking about. What if you happened to get a streak of questions that, just by random chance, you happened to know the answer to? What if you were subsequently unable to answer any other question correctly in the entire test? Are you truly an expert then? Our scoring algorithm is based on this model, that there is a chance that even someone with a low skill may be able to answer a hard question correctly and sometimes someone with a lot of skill will get an easy question wrong. The positive and negative adjustments are used to "balance" out these random chances and after enough answers arrive at the correct score for someone.
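The lucky-streak concern above can be made concrete with a toy calculation (my own illustration, not anything from Smarterer's actual model): on four-choice questions, a pure guesser still has a real chance of opening with a short correct streak, so a few right answers alone can't distinguish a guesser from an expert.

```python
# Toy illustration only (not Smarterer's actual model): the probability
# that someone guessing blindly answers k multiple-choice questions in
# a row correctly.
def lucky_streak_probability(k, choices=4):
    """Chance of guessing k questions in a row on `choices`-option items."""
    return (1.0 / choices) ** k

print(lucky_streak_probability(2))  # 0.0625: 1 in 16 guessers start 2-for-2
print(lucky_streak_probability(5))  # ~0.001: long streaks get unlikely fast
```

This is why a scoring system that only ever adds points would systematically overrate short lucky runs; the downward adjustments are what cancel them out over many questions.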
Your point about bad questions is a very good one. We're working hard to find the correct balance between crowd sourced and crowd edited questions while still maintaining fairness in the test.
I'm not sure if you noticed but when users add new questions to the system they are put into a "nursery" state. Answering questions in the nursery state won't cause you to lose points when you get them wrong. If enough people flag the question as poorly written or inappropriate for any reason then it won't go "live" and be included in the negative score adjustment. Also, when a question is in this "nursery" state, anyone can edit it to improve the wording or to fix mistakes.
Even with this system in place, bad questions do occasionally still make it into the "live" part of the test and cost you points for wrong answers. We're still trying to improve the experience so that people will actually flag bad questions and fewer people have a negative experience with them. We want everyone to be able to help make the test better by flagging and/or editing poorly written questions, and for everyone to feel the test is completely fair, but we're not quite there yet (although we're getting better all the time).
Thanks again for the feedback, we appreciate it.
I vehemently disagree.
I've been in the printing and graphic design business for 20 years, was involved in the beta test of one of the programs I tested on, and *am* at the Adobe Expert level in the programs I was testing. I've trained people to use these programs, and if I can't score above basic, there is something severely wrong with the questioning.
The primary problem being that most of the questions are not applicable to anything other than what I call "trivial knowledge."
Being able to tell you there are 30 palettes doesn't indicate you know how to use them. Questions about a program's keyboard shortcuts are pointless when any designer who knows his trade has created his OWN keyboard shortcuts using the program's extensive customization. It is like penalizing them for knowing how to get the most out of their program! A Mac-based question for a PC user, or a question about a feature of another program not even being tested, is irrelevant.
NO testing system at Adobe or any other official class for these products grades a test using negative scoring for incorrect answers. It should be impossible to get a negative score on any test, and yet you've managed to create one by allowing people to take a test they can't stop until the site allows them to, for an undetermined number of questions, hoping to raise their score back to where it was. This is a "game show" mentality of grading, not an educative one.
In my own frustration at having to stop after almost every question to submit the correct answer, because the question given was poorly worded or just plain wrong, I could see a person simply stopping for fear of losing what little score they have left. Why would any sane person bother to keep answering questions and helping *you* police your site... if it just lowers their score? There is absolutely no incentive to take these tests past intermediate levels.
I had intended to use the site for prospective job applicants, but the only thing it really tests is patience. While that, in itself, is a worthy attribute in a designer, I would rather they know how to do things that actually aid in their job... not in a stint on Jeopardy.
EMPLOYEE0: Hi Sherry, again thanks for the feedback. We're just going to have to agree to disagree I think. :)
Are Smarterer tests perfect? No, but they're getting better all the time. There are definitely experts who currently aren't going to do well on our tests, just as the SAT isn't a perfect measure of scholastic aptitude: some people don't test well in that environment. For the large majority of people, though, our data is showing that the current system of multiple choice *is* working, and working pretty well. Some tests are definitely more mature than others. The system is adaptive and built on machine-learning algorithms, so it improves as more people take the tests.
One of our biggest problems, and the one we're spending the most energy on currently, is the one I think burned you: question quality. We're still working on improving the feedback system, as well as how the scoring for "untrusted" questions works. It's tough to find the correct balance of user-submitted questions and to decide when those questions are 'worthy' of being included in the full scoring portion of the test.
Some people seem to be offended by our use of user-generated questions. We're using crowd-sourced questions not because we're looking for 'free labor' but because we feel that the best reflection of the current state of the art in a body of knowledge emerges from those in the field, those who actually use the skills. We want anyone to be able to generate a test for a tool they think is important, not just a handful of tests preselected by a self-appointed committee. We want the tests to be 'owned by' the people, and because of the machine-learning grading mechanism they are effectively graded by the people too. In this way it's not some faceless set of self-appointed experts passing judgement but rather a reflection of the total pool of test takers who think that skill is important. The skill levels actually represent (in a very real sense) how you did compared to the thousands of other people who've also taken the test.
We could do things the 'old fashioned' way and pick 5 or 6 skills and pay experts to write test banks. That might initially give us better written questions, but we'd have to decide up front what skills we thought were important. Additionally we'd have to hand pick who we thought were actually 'experts'. Those questions would also become stale very quickly with the rapid evolution of skills in today's world. How long does a question about Facebook remain valid? How quickly do they change their interface? We're trying to do things in a new (and we think better) way, and we're looking for people who are passionate about this approach to help us make it a reality.
The way we score tests is also not arbitrary and is based on a lot of psychometric theory on how to measure latent ability in people. (Check out 'item response theory' if you'd like a sense for the kind of research I'm talking about). It's another of our challenges because the way the system works is statistical in nature and involves some complicated moving parts that mean the test results aren't as easy to explain as "8 out of 10 = 80% on the test".
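For readers curious about the 'item response theory' mentioned above, here is a minimal sketch of its core idea, the two-parameter logistic (2PL) model. The function name and parameter values are my own illustration; Smarterer's actual model is not public.

```python
import math

def p_correct(ability, difficulty, discrimination=1.0):
    """2PL item response model: probability that a test taker with a
    given latent ability answers an item of a given difficulty correctly."""
    return 1.0 / (1.0 + math.exp(-discrimination * (ability - difficulty)))

# A strong test taker can still miss an easy item, and a weak one can
# still hit a hard item -- the probabilities are never exactly 0 or 1.
print(p_correct(ability=2.0, difficulty=0.0))   # ~0.88
print(p_correct(ability=-1.0, difficulty=1.0))  # ~0.12
```

This is the statistical sense in which the scoring measures "latent ability": each answer is evidence that shifts the estimate of where your ability sits relative to the questions' difficulties.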
So as for "negative" points for questions, I think you may have a slightly unclear view of how the test is scored. This is another problem we deal with: we're not a "classical" test. There are no fixed points for questions (positive or negative). The amount your score moves depends mainly on your current skill level and the difficulty of the question you just answered. If you're rated at a low skill level and get a hard question correct, you get more points than if you are rated at a high skill level and get that same question correct. The opposite is also true: if you're at a high skill rating and get an easy question wrong, you lose more points than if you were at a low skill rating and got that same question wrong. You can never get more than an 800 or less than a 0, and the system will try to find your natural 'equilibrium' point.
Here are some important points:
* The test is adaptive: it's trying to match you with questions that will challenge you.
* Two people usually won't see the same test.
* It's a continuous measure of skill; the test is never 'done'. You can answer as many or as few questions as you want, but after a certain number of questions your score will 'settle' at an equilibrium point.
* Your score is adjusted by a variable amount after each question based on the difference between your skill level and the level of the question.
* It's actually _impossible_ to get a negative total score or a score greater than 800.
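The bullets above can be sketched as an Elo-style update. This is my own hypothetical reconstruction, assuming a 0-800 scale; the function, the K-factor, and the constants are illustrative, not Smarterer's actual code.

```python
def update_score(score, question_difficulty, correct, k=40.0):
    """Hypothetical Elo-style update matching the bullets above: the
    adjustment depends on the gap between your current score and the
    question's difficulty, and the total always stays within [0, 800]."""
    # Expected chance of a correct answer, given the score/difficulty gap
    expected = 1.0 / (1.0 + 10 ** ((question_difficulty - score) / 400.0))
    outcome = 1.0 if correct else 0.0
    new_score = score + k * (outcome - expected)
    return max(0.0, min(800.0, new_score))  # clamp: never below 0 or above 800
```

With this shape, a low-rated taker who answers a hard question correctly gains far more than a high-rated taker answering the same question correctly, and the clamp makes a negative total (or a score above 800) impossible.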
Please let me know if you'd like more explanation; I can go into more detail if you'd like.
I totally agree with Sherry. I have been using Photoshop professionally for 10 years and am only scoring at proficient. This is mainly due to an immense number of trivial questions and keyboard shortcut questions: there are 5 ways to do everything in such a program, and not knowing every single one at the drop of a hat does not make me less of an expert. Not to mention Sherry's point that you normally customize your workspace and do things automatically. So do I remember every file name? No. Can I get there in seconds while using the software? Yes!
Another example of why this site does not represent reality: I'm a master in 3 social media categories, something I've never done professionally, and this was because the tests went on long enough, with enough repeat questions, for me to build up the score. The photography test, on the other hand, had hardly any questions, and again, after being a photographer for 10 years, I'm merely proficient, with no way to improve until more questions are added?? Does not knowing the history of photography make me a lesser photographer?? Ridiculous.
I thought this site was amusing but I'm horrified to see employers are actually using it as screening and requiring certain scores to get to the next stage of an interview. I hope it's being made clear to them that these scores have NO professional bearing on what one is capable of, and positioning this site otherwise is basically fraud. Not to mention there is nothing stopping someone from having another person take the test for them, or opening up another account to take the test again.
By the way - It's not that I'm totally against being penalized for wrong answers, but I have yet to see reasoning for why some answers are worth hundreds of points and others zero - and all these things combined make me feel this is a very unfair way for employers to be looking at your potential skill set.
I generally agree with being penalized for wrong answers, but I think it's taking it too far when a question has multiple correct answers.
There are a good many questions where the correct answer is "all of the above" or "a and b, but not c." On questions like this, there should be a way to mark certain options as "not wholly incorrect." Choosing one of those options would not earn you any points, but would also not penalize you. I've been caught on numerous questions where there was a very short time limit, I saw a correct answer, and clicked it before reading on to see that option 4 was also valid, and option 5 was "both x and y." Sure, I don't deserve points for a correct answer, but I shouldn't be penalized, since my answer also wasn't *wrong.*
Hi Zoe and Dan,
Thank you both very much for your feedback. We understand that poor quality questions (a prime example being the keyboard shortcut questions) are frustrating, which is why we are continuously working to review/edit/remove flagged questions.
That said, I'd like to just quickly highlight some things that Michael Kowalchik said above. Smarterer tests are designed to be "owned by the people", meaning that we rely on users to add questions and flag others. Tests are always improving as users flag and comment on bad questions, allowing us to review them. That said, if at any time you come across a question you feel is poor quality, feel free to flag it and add a note so that we know exactly how to make it better. Also, if you would like to improve your score, you may add questions of your own to the tests and receive an author bonus.
With regards to the way questions are scored, the number of points granted or deducted is based on many factors. For example, the calculated difficulty of the question you answered and your current skill level are both important in determining how many points you will receive for a correct answer.
We hope that over time, as users continually add questions and flag others to allow us to edit them, the tests will improve and adapt. We are also looking to add more features to the site, so be sure to check back on your tests periodically to see if more questions have been added for you to answer.
I hope this helped clarify some things and I'd be happy to answer any other questions you may have. Thank you again for your comments; feedback like this really helps us as we work to improve Smarterer!
I love how this is marked "Solved" when we agreed to disagree. How about a new category of "Impasse Reached"?
All in all, I'm not impressed with the site. I, too, am rather horrified to see employers actually using the site as a hiring tool. It's like leaning out the window on Cash Cab and trusting the Man on the Street's answer because his hat makes him look "Smarterer". Whatever.
As my smiley indicates: I'm over it... and finished with this site.