AI in Popular Press

Two popular recent articles on applications of machine learning:

  • Million-Dollar Prize Hints at How Machine Learning May Someday Spot Cancer (Will Knight, May 9, 2017)
    • a copy of this article is here
  •  Machine-learning promises to shake up large swathes of finance (The Economist, May 28, 2017)
    • a copy of the article is here

 


Machine Learning for Text in the News (again): Finance

A short but interesting piece appeared in The Economist this week, entitled Machine-learning promises to shake up large swathes of finance, under the heading "Unshackled algorithms" (located here).

Many of the usual observations and platitudes are contained herein, but I thought these quotes were notable:

  • Natural-language processing, where AI-based systems are unleashed on text, is starting to have a big impact in document-heavy parts of finance. In June 2016 JPMorgan Chase deployed software that can sift through 12,000 commercial-loan contracts in seconds, compared with the 360,000 hours it used to take lawyers and loan officers to review the contracts. [So maybe once again I am focused on one of the least remunerative aspects of a new technology…]
  • Perhaps the newest frontier for machine-learning is in trading, where it is used both to crunch market data and to select and trade portfolios of securities. The quantitative-investment strategies division at Goldman Sachs uses language processing driven by machine-learning to go through thousands of analysts’ reports on companies. It compiles an aggregate “sentiment score” based on the balance of positive to negative words. [Seems a bit simplistic, no? A toy sketch of this kind of word-balance score appears just after this list.]

  • In other fields, however, machine-learning has game-changing potential. There is no reason to expect finance to be different. According to Jonathan Masci of Quantenstein, a machine-learning fund manager, years of work on rules-based approaches in computer vision—telling a computer how to recognise a nose, say—were swiftly eclipsed in 2012 by machine-learning processes that allowed computers to “learn” what a nose looked like from perusing millions of nasal pin-ups. Similarly, says Mr Masci, a machine-learning algorithm ought to beat conventional trading strategies based on rules set by humans. [The data point replicates: over a similar timeframe, Elijah Mayfield showed that off-the-shelf, open-source machine learning could, with days of work, produce results competitive with the capabilities of decades-old rule-based systems (e-Rater, Intelligent Essay Assessor, and six others) for scoring essays. See the note below.]
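To see why a raw positive-versus-negative word balance can feel simplistic, here is a toy sketch of such a lexicon-based score. The word lists and formula are my own invention for illustration, not a description of any firm's actual method:

```python
# Toy lexicon-based "sentiment score": purely illustrative word lists and formula.
POSITIVE = {"growth", "beat", "strong", "upgrade", "outperform"}
NEGATIVE = {"miss", "weak", "downgrade", "risk", "decline"}

def sentiment_score(text: str) -> float:
    """Return (positives - negatives) / total sentiment words, in [-1, 1]."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

reports = ["Strong quarter with an upgrade",
           "Guidance miss raises risk of decline"]
aggregate = sum(sentiment_score(r) for r in reports) / len(reports)
print(aggregate)
```

A phrase like "no decline in growth" scores as neutral under this scheme, which is roughly why the approach reads as simplistic.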

 

I would also note that such “supervised learning” applications that leverage NLP (natural-language processing tools, which are used in, but are not by themselves good examples of, AI techniques) are now a standard “first stage” of machine learning, one that typically evolves toward some form of neural network-based improvement, just as the “computer vision” example noted above did in subsequent iterations over the last five-plus years.
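To make that "first stage" concrete, here is a minimal supervised-learning sketch with scikit-learn. The documents and labels are hypothetical, and this illustrates only the general pattern, not any of the systems described above:

```python
# Minimal "first stage": hand-built bag-of-words features plus a linear classifier.
# Hypothetical data; a real system would start from thousands of labeled documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["loan agreement with standard covenants",
        "borrower in default on payment terms",
        "routine amendment to the credit facility",
        "notice of default and acceleration"]
labels = [0, 1, 0, 1]  # e.g. 1 = flags a problem clause

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(docs, labels)
print(model.predict(["payment default under the agreement"]))
```

A later iteration would typically replace the hand-engineered feature stage with learned representations (word embeddings or a neural network), mirroring the computer-vision trajectory described above.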

Good stuff.

For the Elijah Mayfield reference, see:

  • Mayfield, E., & Rosé, C. P. (2013). LightSIDE: Open Source Machine Learning for Text Accessible to Non-Experts. Invited chapter in the Handbook of Automated Essay Grading.
  • Shermis, M. D., & Hamner, B. (2012). Contrasting state-of-the-art automated scoring of essays: Analysis. Annual National Council on Measurement in Education Meeting, March 29, 2012, pg. 1-54.

Data Science Bowl 2017 – more AI for medicine and medical images

I was interested to read the piece in the MIT Technology Review,

Million-Dollar Prize Hints at How Machine Learning May Someday Spot Cancer

A million-dollar prize certainly grabbed some headlines, but the details of the winning solution, namely more image annotations (e.g. more trained doctors and technicians) plus partitioning the basic problem into (a) finding nodules and (b) diagnosing cancer, are both clear signposts to the future. Indeed, the future of low-dose CT scans is certainly looking stronger. And while progress with machine learning, medical imaging, and diagnostic medicine is not always linear (or straightforward, as we read here), 3D images that capture relative tissue density and other characteristics clearly provide a highly construct-relevant feature set, one that is making advances in this area steady and promising (editorial: in a way that other work relying on indirect features and characteristics, computational linguistics in this case, is not yet keeping up with; is this argument convincing?).

Since Google’s acquisition of Kaggle, I have not taken a new look at the Google tool set for creating deep learning networks, but introducing a “semantic data layer” based on a semantic-grammar approach to rubric construction might offer a promising path to better machine understanding of text and speech.

 

 


Listening to the Data – Four ways to tweak your Machine Learning models

Bias and variance, precision and recall: these are concepts that, after a few months or maybe even just a couple of weeks of crawling around in actual data, predictive models, and the study of where prediction and reality meet, begin to have an intuitive feel. But it was nice to read a short piece recently that brings these concepts clearly into focus and frames them in terms of model behavior. This is something I will keep handy to share where my own jabbering on the subject is likely to be less clear and certainly less concise. The source of the article was (via re-post) the KDnuggets blog, which is an excellent resource.
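For anyone still building that intuition, here is a minimal toy sketch (my own example, not from the article) of how precision and recall fall out of a set of predictions:

```python
# Toy illustration of precision and recall; the labels are made up.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # actual outcomes
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # model predictions

# Precision: of the cases the model flagged, how many were right?
# Recall: of the cases that were truly positive, how many did the model find?
print(precision_score(y_true, y_pred), recall_score(y_true, y_pred))
```

Bias and variance, by contrast, are usually diagnosed by comparing training error against validation error rather than by inspecting a single set of predictions.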

There are, perhaps unsurprisingly, many good “nuggets” on the KDnuggets blog / web site. This latest item does a good job of explaining what eventually becomes intuitive to people who work with machine learning models regularly. Perhaps it is particularly relevant to modeling and mining “text” (the work I have been doing in machine learning), because it certainly is spot on there. But since it is really a way of describing how the math models the real world, and how the data is reflected in the math, I expect this view will be helpful to anyone modeling data.

The somewhat “click-bait”-sounding title, “4 Reasons Your Machine Learning Model is Wrong,” is only modestly apologized for with the “(and How to Fix It)” suffix, and it makes me worry that fake-aggressive, pretend-demeaning discourse could be among the worst forms of carry-over from 2016 into 2017.

I will instead remember that genuinely aggressive, demeaning discourse is worse… and continue to appreciate the sharing that sites like this do for the larger community.

Happy New Year!

 

 


TensorFlow is released: Google Machine Learning for Everyone

Google posted information about TensorFlow, the open-source release of a key set of machine learning tools, on their Google Research blog here.

Given the great piles of multi-dimensional tables (or arrays) of data that machine learning typically involves, and (at least for us primitive users) the tremendous shovel work involved in massaging and pushing around these giant data files (sorting out the arcane naming schemes devised to help with this is almost a worse problem in itself), the name “TensorFlow” for a tool meant to help with all of this is at first blush very promising. That is, rather than just a library of mathematical algorithm implementations, I am expecting something that can help make the machine learning work itself more manageable.
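To make the "piles of multi-dimensional arrays" concrete, here is a minimal sketch of tensors in the TensorFlow Python API. Note that this uses the modern API, which has changed considerably since the release described here, so treat it purely as an illustration:

```python
# Minimal illustration of "tensors" as the multi-dimensional arrays discussed above.
# Uses the modern TensorFlow API (eager execution), which differs from the 2015 release.
import tensorflow as tf

batch = tf.constant([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0]])        # a 2 x 3 tensor (rank 2)
weights = tf.constant([[0.1], [0.2], [0.3]])  # a 3 x 1 tensor

output = tf.matmul(batch, weights)            # shapes flow: (2,3) x (3,1) -> (2,1)
print(output.shape, tf.reduce_mean(output).numpy())
```

The promise, as I read it, is that the library manages how these arrays flow through a computation, not just the individual mathematical operations.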

I suspect that just figuring out what this is will cost me a few days… but I have much to learn.

 

 


The Confusion Matrix: Excellent Tool

As can be seen in the paper posted previously, I continue to find the “Confusion Matrix” an excellent tool for judging the performance of machine learning (or other) models designed to predict outcomes for cases where the true outcome can also be determined.

Even simple attempts at explanation sometimes fail (witness the Wikipedia entry), and since I find the confusion matrix so helpful in looking at machine learning model performance, as noted in the prior post, I thought I’d provide a brief example here (from the paper), as well as pointers to good web resources for understanding “Quadratic-Weighted Kappa”: a single descriptive statistic that is often used to quantify “inter-rater reliability” in a way that is more useful, or comprehensive, than mere “accuracy”, if less descriptive by nature than these lovely visual aids.

So here are two Confusion Matrices representing output from two different theoretical machine learning models:


[Sample Confusion Matrices for Model A and Model B]

The point of these two model performance diagrams was to show that while the two models have identical “accuracy” (or exact-match rates between predicted and achieved output), the first model has a more balanced error distribution than the second. The second model has a “better” quadratic-weighted kappa, but it also demonstrates a consistent “over-scoring” bias. I think most people would agree that the first model is “better”, despite its (slightly) lower QWK.
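For anyone who wants to run this kind of comparison on their own confusion matrices, here is a minimal sketch in Python (my own illustration, not code from the paper); the sample matrix is hypothetical, not one of the two pictured above:

```python
import numpy as np

def accuracy(cm: np.ndarray) -> float:
    """Exact-match rate: the diagonal of the confusion matrix over the total."""
    return np.trace(cm) / cm.sum()

def quadratic_weighted_kappa(cm: np.ndarray) -> float:
    """QWK computed straight from a confusion matrix (rows = human, cols = model)."""
    cm = cm.astype(float)
    n = cm.shape[0]
    idx = np.arange(n)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n - 1) ** 2   # quadratic penalty
    expected = np.outer(cm.sum(axis=1), cm.sum(axis=0)) / cm.sum()  # chance agreement
    return 1.0 - (weights * cm).sum() / (weights * expected).sum()

# Hypothetical 4-point-scale matrix, not one of the matrices pictured above.
cm = np.array([[50,  5,  0,  0],
               [ 5, 40,  5,  0],
               [ 0,  5, 45,  5],
               [ 0,  0,  5, 35]])
print(accuracy(cm), quadratic_weighted_kappa(cm))
```

Running both functions on a pair of matrices with identical diagonals makes the point above directly: accuracy stays fixed while QWK moves with how far, and in which direction, the off-diagonal errors fall.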

 

—————-

 

And then lastly, the promised reference information for folks simply interested in getting a better grip on Quadratic-Weighted Kappa (excerpts from my paper posted earlier):

….

Two helpful web pages that can do quick Kappa calculations on data already prepared in the form of a Confusion Matrix can be found at http://www.marcovanetti.com/pages/cfmatrix and even more helpfully (this one includes quadratic-weighted kappa, and a host of other related calculations) at http://vassarstats.net/kappa.html.

Excellent and thorough definitions of Kappa, and of its relevance for comparing two sets of outcomes for inter-rater reliability, can be found in many places. These range from simple, mechanical, statistical definitions (with some implicit assertions or assumptions that might be worth examining) to detailed examinations of the various forms of Kappa (including the linear, quadratic, and other weightings that acknowledge the relationship between classification labels), and specifically of the “chance-corrected” aspect of the calculation, the independence assumptions, and other factors that give more, or less, weight to the idea that QWK (or the intraclass correlation coefficient) is, or is not, a good measure of what we are trying to get at: the degree of fidelity between the outputs of a trained “scoring engine” and actual human judgments. See, for example:

Also worthwhile are the notes and discussion on the two Kappa calculation web pages / sites noted above.


Scoring Long-form Constructed Response: Statistical Challenges in Model Validation

The increasing desire for more authentic assessment, and for the assessment of higher-order cognitive abilities, is leading to an increased focus on performance assessment and the measurement of problem-solving skills, among other changes, in large-scale educational assessment.

Present practice in production scoring for constructed-response assessment items, where student responses of one to several paragraphs are evaluated on well-defined rubrics by distributed teams of human scorers, currently yields (in many cases) results that are barely acceptable even for coarse-grained, single-dimension metrics. That is, even when scoring essays on a single four- to six-point scale (as was done, for example, in the ASAP competition for automated essay scoring on Kaggle[1]), human inter-rater reliability is marginal (or at least less reliable than might be expected), in the sense that inter-rater agreement rates ranged from 28 to 78%, with associated quadratic-weighted Kappas ranging from .62 to .85.

Said another way, about half the time two raters (or human (averaged) scores and AI scoring engines) will yield the same result for a simple measure, while the rest of the time the variation can be all over the map. Kappa does not really tell us very much about this variation, which is a concern, because “better” (higher) Kappas might also mask abnormal or biased relationships, while models with slightly lower Kappas might, on examination, provide an intuitively more appealing result. And for scoring solutions that seek to use more detailed scoring rubrics, and to provide sub-scores and more nuanced feedback, while still solving for reliability and validity in overall scoring, the challenge of finding the “best” model for a given dataset will be even greater.

I have written a short paper that focuses solely on the problem of evaluating models that attempt to mimic human scores provided under the best of conditions (e.g. expert scorers not impacted by timing constraints), and that addresses the question of how to define the “best” performing models. The aforementioned Kaggle competition chose Quadratic-Weighted Kappa (QWK) as the means of measuring the conformance between scores reported by a model and scores assigned by human scorers. Other Kaggle competitions routinely use other metrics as well[2], while some critics of the ASAP competition in particular, and of the use of AES technology in general, have argued that other model performance metrics might be more appropriate[3].

[Update: at this point the paper simply illustrates why QWK is not by itself sufficient to definitively say one model is “better” than another, by providing a counterexample and some explanation of the problem.]

As a single descriptive statistic, QWK has inherent limits in describing the differences between two populations of results. Accordingly, this short note presents an example to illustrate the extent of those limitations. In short, I think that, at least for a two-way comparison between a set of results from human scorers and a set of results from a trained machine learning model trying to emulate them, the basic “confusion matrix” (a two-dimensional grid with exact-match results on the diagonal and non-exact matches off it) provides an unbeatable visualization of just how random, or not, the results of using a model look against a set of “expert” measures.
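As a practical aside, the grid itself is easy to produce once you have paired human and engine scores; here is a minimal sketch with scikit-learn (the score vectors are made up for illustration and are not data from the paper):

```python
# Build the confusion matrix and QWK from paired human / model scores.
# The score vectors here are invented for illustration only.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

human = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
model = [3, 2, 3, 4, 1, 2, 3, 4, 3, 3]

print(confusion_matrix(human, model))                        # exact matches on the diagonal
print(cohen_kappa_score(human, model, weights="quadratic"))  # the QWK discussed above
```

The same matrix pasted into either of the calculators linked above should yield the same QWK.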

Future efforts will consider suggested alternatives to QWK, or additional descriptive statistics that can be used in conjunction with it, hopefully leading to more usable and “better” criteria for particular use cases, and to suggestions for further research.

Feedback welcome!  Full document is linked here: SLFCR-scmv-140919a-all

 

—————–

[1] See https://www.kaggle.com/c/asap-aes

[2] See https://www.kaggle.com/wiki/Metrics

[3] See particularly the section entitled “Flawed Experimental Design I” in Les C. Perelman’s paper at http://journalofwritingassessment.org/article.php?article=69