OUR SECTORS
At European Tech Recruit, our sectors cover a wide range of industries within the field of technology.
Client services
Learn about the range of client services we offer at European Tech Recruit, and browse through our case studies.
About us
Learn about European Tech Recruit's mission, values, our team, and our commitment to DE&I.
DataOps & MLOps Engineer
Position Overview
We’re looking for a DataOps & MLOps Engineer to build the infrastructure that powers our data and ML workflows. You’ll focus on data storage and movement, dataset versioning, ML pipeline automation, experiment tracking, and ensuring reproducibility across our 3D reconstruction and training workloads.
Main Responsibilities
- Design and manage data storage systems for large datasets (multi-TB image data, 3D assets, training data)
- Build efficient data access patterns and movement strategies for distributed training and experimentation
- Implement dataset versioning and lineage tracking for reproducibility
- Set up and maintain experiment tracking and model registry infrastructure (MLflow, Weights & Biases)
- Build ML pipelines for data preprocessing, training, validation, and model registration (Kubeflow, Airflow, Prefect)
- Support distributed training workflows across multi-GPU clusters (PyTorch Distributed, Horovod, Ray)
- Profile and optimize training pipelines: data loading bottlenecks, batch sizing, GPU memory utilization
- Ensure reproducibility of experiments: environment pinning, data versioning, artifact management
- Manage artifact storage and distribution (Docker registries, model registries, package repositories)
- Build tooling to improve developer productivity for ML workflows
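As a concrete illustration of the dataset versioning and lineage responsibilities above, here is a minimal sketch of content-addressed dataset fingerprinting using only the Python standard library. It is a toy stand-in for what tools like DVC or Delta Lake do in production (the function and file names here are illustrative, not part of any real stack): identical bytes yield identical IDs, and a lineage manifest links derived datasets back to their inputs.

```python
import hashlib
import json
from pathlib import Path

def fingerprint_dataset(root: str) -> str:
    """Compute a deterministic content hash over every file in a dataset
    directory, so two checkouts with identical bytes get identical IDs."""
    digest = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            # Hash the relative path too, so renames change the fingerprint.
            digest.update(str(path.relative_to(root)).encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

def record_lineage(dataset_id: str, parents: list[str],
                   manifest: str = "lineage.json") -> None:
    """Append a lineage record linking a derived dataset to its inputs."""
    entries = []
    if Path(manifest).exists():
        entries = json.loads(Path(manifest).read_text())
    entries.append({"dataset": dataset_id, "parents": parents})
    Path(manifest).write_text(json.dumps(entries, indent=2))
```

Tagging every training run with the fingerprint of its input dataset is one simple way to make "which data produced this model?" answerable after the fact.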
Qualifications
- Strong Linux knowledge
- Experience with data storage systems and large file handling (object storage, NFS, distributed filesystems)
- Knowledge of dataset versioning tools (DVC, Delta Lake, or similar)
- Experience with ML pipeline orchestration (Airflow, Prefect, Kubeflow)
- Familiarity with experiment tracking tools (MLflow, Weights & Biases, Neptune)
- Understanding of distributed training frameworks and patterns
- Experience with containerization (Docker) and CI/CD pipelines
- Knowledge of Python dependency and environment management
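To make the environment-pinning side of reproducibility concrete, here is a small standard-library sketch that snapshots the interpreter and installed package versions into a hashable manifest. In practice lockfile tooling (pip-tools, conda-lock, uv) handles this; the function names below are illustrative only.

```python
import hashlib
import json
import platform
import sys
from importlib import metadata

def environment_manifest() -> dict:
    """Snapshot the interpreter, OS, and installed package versions so an
    experiment can later be replayed against the same environment."""
    packages = sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in metadata.distributions()
    )
    return {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "packages": packages,
    }

def environment_id(manifest: dict) -> str:
    """Stable short hash of the manifest, handy as a tag on experiment runs."""
    blob = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]
```

Attaching the resulting ID to each run in an experiment tracker makes environment drift between two runs immediately visible.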
Nice to Have
- Experience with model registries and deployment workflows
- Familiarity with data quality validation frameworks
- Knowledge of 3D graphics processing or computer vision workflows
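For a sense of what data quality validation involves at its simplest, here is a toy schema check in plain Python. Real frameworks (e.g. Great Expectations) add typed expectations, profiling, and reporting on top of this idea; the schema and record shapes below are invented for illustration.

```python
def validate_records(records: list[dict], schema: dict) -> list[str]:
    """Check each record against a {field: expected_type} schema and
    return a list of human-readable error strings (empty if all pass)."""
    errors = []
    for i, rec in enumerate(records):
        for field, expected in schema.items():
            if field not in rec:
                errors.append(f"record {i}: missing field '{field}'")
            elif not isinstance(rec[field], expected):
                errors.append(
                    f"record {i}: '{field}' is "
                    f"{type(rec[field]).__name__}, expected {expected.__name__}"
                )
    return errors
```

Running a check like this before a training job starts turns a silent mid-epoch crash into an actionable pre-flight failure.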
Apply Now
By applying to this role, you acknowledge that we may collect, store, and process your personal data on our systems.
For more information, please refer to our Privacy Notice.