Lecture archive

Tackling Offensive Content in Machine Learning

Machine learning models, even when delivering state-of-the-art performance, can produce offensive, inappropriate, or toxic content and reinforce stereotypes or social biases. Offensive behaviour can be introduced inadvertently at any stage of the machine learning workflow, including data sourcing, preprocessing, and model training. The topic is especially pressing in Natural Language Processing (NLP). This session covers techniques and approaches for identifying, quantifying, and mitigating offensive content in language models, such as those used in autocorrect, smart compose, and translation. The demo features a hands-on Python exercise in Azure Machine Learning, in which we leverage popular machine learning frameworks to detect and mitigate offensive content in language modelling.
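To give a concrete picture of what one such detection step can look like, here is a minimal sketch (not the session's actual demo code) that screens smart-compose style suggestions with a pretrained toxicity classifier. It assumes the Hugging Face transformers library and the publicly available unitary/toxic-bert checkpoint; the 0.5 threshold and the filter_suggestions helper are purely illustrative.

    # Minimal sketch: screen candidate text suggestions with a pretrained
    # toxicity classifier before they reach users. Assumes the Hugging Face
    # `transformers` package and the public `unitary/toxic-bert` checkpoint;
    # the threshold is illustrative, not a value recommended in the talk.
    from transformers import pipeline

    # Downloads the checkpoint on first use; for each input text the pipeline
    # returns the highest-scored toxicity label (toxic, insult, threat, ...).
    toxicity = pipeline("text-classification", model="unitary/toxic-bert")

    def filter_suggestions(candidates, threshold=0.5):
        """Keep only suggestions whose top toxicity score stays below the threshold."""
        safe = []
        for text in candidates:
            result = toxicity(text)[0]  # e.g. {'label': 'toxic', 'score': 0.97}
            if result["score"] < threshold:
                safe.append(text)
        return safe

    # Expected to keep only the first, benign suggestion.
    print(filter_suggestions(["Have a great day!", "You are a complete idiot."]))

Filtering model outputs in this way is only one piece of the puzzle; the session also looks at identifying and mitigating offensive content earlier in the workflow, for example in the training data itself.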

Tadej Magajna

Microsoft d.o.o.

Tadej Magajna is a former lead machine learning engineer and data scientist, now a software engineer at Microsoft. He is the author of the recently published book "Natural Language Processing with Flair". He works on a team responsible for language model training and for building language packs for products such as Microsoft SwiftKey and Windows. He has tackled problems ranging from NLP-based market research and public transport bus and train capacity forecasting to, in his current role, language model training.