Testing ML Models – Experiences from building a model-testing product
Abstract: Testing AI/ML projects today is at a stage similar to where conventional software development was two decades ago, when independent testing was neither popular nor widespread. Today, ML models are tested by their developers (read: data scientists) to whatever extent they think possible. Are they exposed to techniques of model testing beyond the usual metrics, such as accuracy, and methods, such as cross-validation? Do they know enough about privacy leakage, security, bias, and explainability? Can they use these techniques to improve the quality of their models? What other tests are they missing out on? This presentation explains some of these techniques, shares experiences from building a product that supports all of them, and discusses industry trends and how model testing is likely to shape up in the near future.