Job description
You are passionate about data: making it clean, understandable, useful, and reusable. You work closely with clients, product owners, and developers to turn messy or complex data into high-quality structured data that powers applications, APIs, and analytics. You take ownership of data workflows and help define how data should be documented, integrated, and governed. You also contribute to client applications to ensure data models are implemented correctly and consistently.
Who are we looking for?
Experience with data modeling and semantic technologies goes a long way. An eagerness to learn and improve data quality goes even further. We value practical problem-solving skills and a genuine interest in making data work for real-world applications.
We look for someone who:
- Enjoys exploring and solving data puzzles with practical, reusable solutions.
- Has experience with data modeling, semantic technologies, and data transformation.
- Likes working with open standards and open source tools.
- Is detail-oriented, communicative, and eager to improve data quality.
- Wants to learn how to build applications that work with linked data, and is willing to invest time in learning semantic software development.
Speaking Dutch is a big plus: the majority of our customers, and a large part of the business domain, are in Flanders. For practical reasons you should be based in Belgium and have a European nationality or a European work permit.
What you'll do
You work on, or at least understand, application code to directly improve data modeling decisions and support long-term maintainability. You will regularly balance ideal semantic models with real-world constraints such as legacy data, timelines, and client needs, and help make those trade-offs explicit.
- Clean, normalize, and transform diverse data sources and integrate them into our systems.
- Design and maintain data models, ontologies, vocabularies, and schemas.
- Define and document data standards and mappings for internal and client use.
- Partner with developers to build models that are used in practice.
- Build data validation, profiling, and quality checks across pipelines.
- Support semantic interoperability and linked data best practices.
- Communicate complex data concepts clearly to technical and non-technical audiences.
Some technologies and standards we use are:
- Git, Linux, Python, Docker, OpenLink Virtuoso Open Source, the coffee corner
- SPARQL, SHACL, RML, OWL, RDF, the Semantic Web