Senior DevOps – AI, Geospatial & HPC Infrastructure

About

🌱 Symbiose is a venture-backed deeptech company at the crossroads of space-tech and nature-tech, pioneering the use of AI, remote sensing, and High-Performance Computing (HPC) to transform how forests are monitored, managed, and valued.

By fusing diverse Earth Observation data (hyperspectral, LiDAR, radar, and optical imagery), Symbiose delivers precise, real-time insights into forest health, carbon dynamics, and climate resilience. Our AI-driven platform empowers forest owners, investors, and insurers with actionable intelligence to accelerate FSC certification, enhance transparency, and drive a nature-positive economy.

Our mission is simple but ambitious:
👉 Make forest data actionable to protect biodiversity, enhance carbon capture, and make sustainable forestry the new norm.

🚀 Why join us?

At Symbiose, you’ll join a passionate team working at the intersection of AI for good and climate innovation.
You’ll have the freedom to experiment, the space to grow, and a direct impact on how forests are managed globally.

🌍 Purpose-driven mission: your work contributes directly to forest conservation and climate resilience.

💡 Deep tech + impact: applied AI, geospatial analytics, and environmental modeling.

🤝 Collaborative culture: open-minded, curious, and committed to real-world change.

🪴 Based at Station F: Europe’s largest startup campus, surrounded by top innovators.

🔎 What we’re looking for

We’re always on the lookout for bold thinkers and builders: people who thrive at the frontier of technology and ecology.
Whether your background is in machine learning, remote sensing, data engineering, forestry, or business development, if you care about the planet and cutting-edge innovation, you’ll fit right in.

Job Description

Role Overview

At Symbiose, we build the infrastructure layer to understand and value forests at scale using Earth Observation, AI, and High-Performance Computing (HPC).

Our platform processes large-scale satellite imagery (Sentinel), LiDAR, geospatial, and climate datasets to produce forest growth models, biomass estimations, and climate risk indicators.

We are looking for a Senior DevOps / Platform Engineer to take ownership of the infrastructure powering these systems — across cloud, data pipelines, backend services, and HPC workloads.

You will operate a data-intensive, geospatial, and compute-heavy platform in production. This is a unique opportunity to work with a state-of-the-art stack.

Key Responsibilities

Infrastructure & Cloud

  • Own and operate infrastructure across AWS and Azure

  • Design, deploy, and maintain production-grade systems

  • Manage Infrastructure as Code (Terraform)

  • Ensure security, IAM, networking, and cost control

Backend & Platform Support

  • Support production environments across Python, Node.js, and GraphQL services

  • Ensure reliability of APIs and backend systems

  • Handle asynchronous workloads and batch processing systems

Data & Geospatial Infrastructure

  • Design and optimize data pipelines (EO, LiDAR, climate datasets)

  • Maintain data lake architectures (S3, Parquet, GeoParquet)

  • Optimize PostgreSQL / PostGIS performance

  • Support geospatial and raster-heavy workflows (GeoTIFF, COG, MBTiles/PMTiles)

MLOps & Compute Systems

  • Enable deployment and scaling of ML models (Python, PyTorch)

  • Support training and inference pipelines

  • Contribute to model lifecycle and monitoring

HPC & Distributed Workloads

  • Operate and optimize HPC / distributed compute environments

  • Handle job orchestration, scheduling, and parallel workloads

  • Optimize performance, resource allocation, and compute efficiency

  • Conduct benchmarking and scaling analysis

Observability & Reliability

  • Implement monitoring, logging, and alerting (Prometheus, Grafana)

  • Improve fault tolerance and incident response

  • Ensure reliability of long-running data and compute jobs


Qualifications

  • 5+ years in DevOps / Platform Engineering / Cloud Infrastructure

  • Strong experience with:

    • AWS and/or Azure (compute, storage, networking)
    • Docker (required) and containerized environments
    • Terraform (or strong IaC experience)
    • CI/CD pipelines (GitLab CI preferred)
    • Linux systems and scripting
  • Experience supporting data-intensive systems or pipelines

  • Comfortable working with Python and Node.js environments

  • Ability to work on or quickly adapt to HPC and distributed systems

  • Strong ownership mindset and autonomy


Strong Pluses

  • Experience with PostgreSQL / PostGIS

  • Exposure to geospatial systems or EO data

  • Experience with:

    • Airflow / workflow orchestration
    • Spark / PySpark
    • MLflow or model tracking systems
    • Ray or distributed compute frameworks
  • Experience optimizing compute-heavy or batch workloads

  • Familiarity with GeoParquet, COG, PMTiles, or large raster pipelines

Recruitment Process

Step 1: 30-minute screening call

Step 2: Technical questions (async)

Step 3: Deep-dive discussion (architecture & systems)

Step 4: Fast onboarding 🚀

The full process can be completed within 1–2 weeks.

Additional Information

  • Contract Type: Full-Time
  • Location: Paris
  • Partial remote possible