Listening to the Data – Four ways to tweak your Machine Learning models
Posted: December 31, 2016
Bias and variance, precision and recall – these are concepts that begin to have an intuitive feel after a few months, or maybe even just a couple of weeks, of crawling around in actual data, predictive models, and the study of where prediction and reality meet. So it was nice to read recently a short piece that brings these concepts clearly into focus and frames them in terms of model behavior. This is something I will keep handy to share where my own jabbering on the subject is likely to be less clear and certainly less concise. The source of the article was (via re-post) the KDnuggets blog, which is an excellent resource.
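To make precision and recall concrete before going further: a minimal sketch in plain Python, with invented labels purely for illustration (nothing here comes from the article itself).

```python
# Precision: of the items the model flagged positive, how many really were?
# Recall: of the items that really were positive, how many did the model catch?

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical labels: 1 = positive class, 0 = negative class.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
p, r = precision_recall(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f}")  # prints precision=0.75 recall=0.75
```

The trade-off between the two is exactly the kind of model behavior the article frames in intuitive terms.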
There are, perhaps unsurprisingly, many good “nuggets” on the KDnuggets blog. This latest item does a good job of explaining what eventually becomes intuitive to people who work with machine learning models regularly. It is particularly relevant to modeling and mining “text” – the work I have been doing in machine learning – where it is certainly spot on. But since it is really a way of describing how the math models the real world, and how the data is reflected in the math, I expect this view is likely helpful to anyone modeling data.
The somewhat “click-bait” sounding title, “4 Reasons Your Machine Learning Model is Wrong,” is only modestly apologized for with the “(and How to Fix It)” suffix, and it makes me worry that fake-aggressive, pretend-demeaning discourse could be among the worst forms of carry-over from 2016 into 2017.
I will instead remember that genuinely aggressive, demeaning discourse is worse… and continue to appreciate the sharing that sites like this do for the larger community.
Happy New Year!