
Industrial Ph.D. Candidate in Explainable Predictions

About Peltarion:
Peltarion’s mission is to make AI usable and affordable for all companies and organizations. We are creating the next-generation AI platform for developing, managing and deploying deep learning systems at scale. As a fast-moving company, we are constantly looking for great new talent who can help us develop our platform and conduct scientific research to reach our goal. Peltarion has received funding for a Ph.D. candidate position at EIT Digital; you can find more details about the program on their website.

About the role:
If you are interested in pursuing a Ph.D. degree in AI, and want to work with some of the best AI researchers in the field, this is the role for you. It will be a five-year position, with 80% of your time dedicated to the Ph.D. project and 20% working with the machine learning team at Peltarion (a.k.a. The Menagerie). The research focus for this position will be in the field of explainable predictions and model understanding.

Description of the research:
As AI models are deployed across ever larger parts of society, the question of model “explainability” and the related concept of “interpretability” become paramount. From a legal point of view, GDPR states that individuals have the right to know the reasoning behind a decision that has affected them adversely, even if the reasoning is purely algorithmic. In addition, insight into a model’s decision process may reveal non-obvious bias in its decision-making.

Explainability is also important from a technical point of view when “debugging” models, and from a social perspective as a way to build users’ trust in a model’s decisions. Some work has been done on model explainability, but the field is still immature, and no standard terminology for describing different types of explainability has been established yet. Explainability methods have been proposed for numerical (tabular) data, image classification and text analysis, but little work has been done on explanation methods for models that use many data modalities (e.g., text, numeric and image data together).

This Ph.D. project will be aimed at research questions related to the problem of multi-modal explanation methods and model understanding, to ensure safe and trustworthy use of AI in our society.

You have an educational background in computer science, mathematics or a related scientific discipline. Professional experience in relevant fields is a merit. We will look for the following skills in your resume:
- University master’s degree or equivalent working experience in computer science, mathematics or related field
- Theoretical understanding of as well as hands-on experience with modern machine learning techniques
- Skills working with data and machine learning using Python, R, Matlab or similar languages and tools
- Excellent written and oral communication skills, to share findings with the rest of the team as well as externally
- Self-motivated to conduct original research and choose appropriate methodologies to solve advanced problems

Everything we do at Peltarion strives to enable true democratization and operationalization of AI. We expect that you will be committed to helping us put AI to use to improve the lives of many.

We believe that diversity of skills, experience and personality leads to a better environment for our employees. If you are humble, considerate and fun to work with, enjoy working in a team and have a sense of humor, take the opportunity to be part of this awesome journey and join our incredible team!

Last day of application: Friday, August 23rd.

Line Thomson, People Operations @ Peltarion

Apply for this job




Holländargatan 17
111 60 Stockholm

