Warsaw, Poland.
I obtained my master’s with honors from the Technical University of Denmark. I then continued my work on data science at the same university and obtained a PhD. After a short postdoc there, I am now at Northeastern University, working with Professor Alan Mislove and Professor Christo Wilson on privacy and fairness in machine learning.
My research is centered on the behavior of individuals and on the interactions people have with one another and with computer systems. I apply machine learning and other statistical approaches to model the spread of diseases and information, human mobility, and social interactions, and to infer relationships and life outcomes of study participants. I measure the impact of a person’s actions and social network on their privacy. Finally, I investigate algorithmic black boxes and study ways of detecting, measuring, and eliminating biases in automated decision-making systems.
Bias in algorithms comes from many sources: how the training data are collected and processed, the benchmarks against which the algorithms are tested, and optimizing for what we can measure rather than for the real goal of the system. The field is maturing quickly, but there is much more to discover.
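As a small illustration of what “measuring bias” can mean in practice, the sketch below computes a demographic parity gap on made-up decision data. The column names, the toy numbers, and the 80% (“four-fifths”) threshold are assumptions for the example, not taken from any particular study or system I have worked on.

```python
# Minimal sketch: a demographic-parity check on synthetic decision data.
# Column names ("group", "approved") and the 0.8 threshold are illustrative assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Positive-decision rate for each group.
rates = decisions.groupby("group")["approved"].mean()

# Demographic parity gap: difference between the highest and lowest group rates.
parity_gap = rates.max() - rates.min()

# A commonly cited rule of thumb (assumed here): flag the system if the lower
# rate falls below 80% of the higher rate.
flagged = rates.min() / rates.max() < 0.8

print(rates)
print(f"parity gap: {parity_gap:.2f}, flagged: {flagged}")
```

A single number like this is, of course, only a starting point; which fairness metric is appropriate depends on the decision being made and on whom it affects.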
I would like to make decision-making systems more transparent and understandable, both for the people who operate them and for those affected by them.