About The Company
GROPYUS creates sustainable, affordable, and aspirational buildings for everyone through modular construction, setting a new standard in smart living.
About The Role
We are growing our Data Language Team within our Digital Twin department. By creating a semantic Digital Twin, the department will enable hyper-automation of buildings as products. By creating a GROPYUS-wide language for data through semantic modelling, we make data interoperable across domains and render it machine-readable.
The Data Language team interacts with experts from various domains such as sustainability, AI, IoT, smart factory, construction engineering, building architecture, logistics, and software engineering, and solves complex challenges pertaining to the end-to-end interoperability of data throughout the lifecycle of a building and all its associated processes.
As part of our Data Language team you will:
- Be involved in designing data and information models by formalizing concepts from various domains using state-of-the-art approaches.
- Integrate data processing pipelines for IoT, smart factory, and building systems.
- Integrate data ingestion with knowledge graph models.
- Implement data governance and security requirements.
- Work on a state-of-the-art knowledge engineering data backend with the team.
- Develop and suggest new ideas to address problems in the algorithmic and modelling domains.
What you bring:
- You are experienced with one of the following languages: Python, Scala, Java, or other JVM languages.
- You are experienced with Semantic Web technologies, ontology engineering, and knowledge graphs.
- You are experienced in using RDFS and OWL as data modelling languages for describing RDF data.
- You have experience with graph query languages such as SPARQL, or alternatively Cypher or Gremlin.
- You are familiar with graph validation and rule evaluation tools such as SHACL, SWRL, or others, and can explain the concepts behind them.
- You have experience working with graph databases such as AWS Neptune, RDF4J, Neo4j, Blazegraph, GraphDB, Apache Jena, or similar (usage or backend experience is a strong plus).
- You have strong analytical skills and a transparent, collaborative communication style.
Nice to have:
- Knowledge of building data-intensive reactive processing pipelines and how to scale them.
- Experience in setting up IoT information systems
- Experience working with industrial-scale data and/or complex and large datasets
- Knowledge of using ontology design tools, such as Protégé
- Experience with Azure data solutions and FaaS/serverless, or knowledge of these concepts from other cloud providers (GCP or AWS), is a strong plus