When doing machine learning, be clear about what your goal actually is: obtaining a model, or obtaining predictive power.
The former is the statistician's goal, the latter the data scientist's.
If your model doesn't show the same performance on the training set and in the live environment, that is not a matter of trust; it is a problem in either your dataset or your testing framework. Trust is built on performance, and performance on metrics: design the ones that work for your problem and stick to them. If you're looking for trust in interpretability, you're just asking the model questions to which you already know the answers, and demanding that they be given back in exactly the form you expect. Do you really need machine learning to build such a system? The need for ML arises when you know the questions and the answers but don't know an easy way to get from one to the other. We need a technique to approximate that process, and it may well be that an easy explanation for it doesn't even exist.
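As a minimal sketch of "pick a metric and stick to it", the snippet below evaluates the same metric on the held-out test split and on a slice standing in for live traffic; the dataset, model, and "live" sample are all hypothetical and only illustrate the workflow, not the author's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical data: one synthetic pool split into train / test / "live" slices.
X, y = make_classification(n_samples=6000, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_test, X_live, y_test, y_live = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Same metric everywhere: offline test split vs. the labelled "live" slice.
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
live_auc = roc_auc_score(y_live, model.predict_proba(X_live)[:, 1])

print(f"test AUC: {test_auc:.3f}  live AUC: {live_auc:.3f}")
# A large gap between the two numbers points at the dataset or the testing
# framework, not at how interpretable the model is.
```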
(omitted)
I'm not a fanboy, and the more I learn about machine learning while trying to build real products out of it, the more I lose interest in this kind of discussion. Probably the only genuinely useful thing about ML is its ability to replicate processes that aren't easy to describe explicitly: you just need questions and answers, and the learning algorithm will do the rest. Asking for interpretability as a condition for real-world usage undermines the foundations of the whole field. If the trained model performs well and is not interpretable, we are probably on the right track; if it is interpretable (and the explanation is understandable and replicable), why lose weeks and GPU power? Just write some if-else clauses.
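To make the "just write some if-else clauses" point concrete, here is a hypothetical sketch: if the model's behaviour can be explained fully and replicably, it collapses into a handful of explicit rules. The feature names and thresholds below are invented for illustration, not taken from any real model.

```python
def rule_based_flag(amount: float, n_recent_failures: int, is_new_device: bool) -> bool:
    """Flag a transaction as risky using explicit, fully interpretable rules."""
    if amount > 10_000:
        return True
    if n_recent_failures >= 3 and is_new_device:
        return True
    return False

# If a few lines like these capture the behaviour you wanted from the model,
# the weeks of training and the GPU budget were spent on a problem you could
# already state directly.
print(rule_based_flag(12_500.0, 0, False))  # True
print(rule_based_flag(200.0, 3, True))      # True
print(rule_based_flag(200.0, 1, True))      # False
```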
Source: If it’s interpretable it’s pretty much useless. – Massimo Belloni – Medium