Senior Data Engineer - AI Context & Knowledge Systems
Remote
We are looking for a Senior Data Engineer to build the "memory" and "knowledge" backbone of our Agentic AI ecosystem. You will design the data pipelines that feed our Model Context Protocol (MCP) servers, ensuring that AI agents managed via Gravitee have real-time access to accurate, secure, and contextually relevant enterprise data.
Key Responsibilities
Context Engineering: Design and optimize data schemas specifically for LLM consumption, ensuring that data retrieved via MCP servers is structured to minimize token usage and maximize reasoning accuracy.
Hybrid Pipeline Development: Build robust data pipelines using Python (for AI/ML workflows) and C#/.NET (for enterprise integration) to move data from legacy systems into AI-ready formats.
Vector Database Management: Implement and maintain Vector Databases (e.g., Pinecone, Weaviate, or Milvus) to support Retrieval-Augmented Generation (RAG) alongside live API tool calls (a minimal retrieval sketch follows this list).
Data Governance for AI: Work with the Gravitee API Gateway to enforce data masking, PII redaction, and fine-grained access control before data reaches an LLM.
Metadata Orchestration: Manage the OpenAPI and MCP metadata that allows AI agents to "understand" the data they are querying (see the tool-descriptor sketch below).
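To make the RAG responsibility concrete, here is a minimal retrieval sketch. It substitutes an in-memory index for a managed vector database (Pinecone, Weaviate, or Milvus) and uses a toy hashed bag-of-words embed() in place of a real embedding model; the document IDs and texts are hypothetical.

import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy hashed bag-of-words embedding (stand-in for a real model):
    texts that share tokens get similar vectors. Illustrative only."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

# In-memory stand-in for a vector database: (doc_id, vector, metadata).
corpus = {
    "doc-1": "Refunds are processed within 5 business days.",
    "doc-2": "Enterprise SSO is configured in the admin console.",
}
index = [(doc_id, embed(text), {"text": text}) for doc_id, text in corpus.items()]

def retrieve(query: str, top_k: int = 1) -> list[tuple[str, str]]:
    """Return the top_k most similar chunks; vectors are unit-norm,
    so the dot product is cosine similarity."""
    q = embed(query)
    ranked = sorted(index, key=lambda rec: float(q @ rec[1]), reverse=True)
    return [(doc_id, meta["text"]) for doc_id, _, meta in ranked[:top_k]]

# Retrieved chunks are appended to the LLM prompt as grounding context;
# a live MCP tool call covers data the index cannot.
print(retrieve("How long do refunds take?"))   # expected: doc-1 ranks first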
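And a sketch of the tool metadata the "Metadata Orchestration" bullet refers to. In MCP, a server advertises each tool with a name, a description, and a JSON Schema input contract, which is what lets an agent decide when and how to query the underlying data. The field names below follow the MCP tool-listing shape; the tool itself is hypothetical.

# Hypothetical MCP tool descriptor. The description and inputSchema are
# the metadata an agent reads to "understand" the data behind the tool.
customer_lookup_tool = {
    "name": "lookup_customer",
    "description": (
        "Fetch a customer's profile and recent orders. Returns masked "
        "PII unless the caller holds the 'pii:read' scope."
    ),
    "inputSchema": {                       # JSON Schema input contract
        "type": "object",
        "properties": {
            "customer_id": {
                "type": "string",
                "description": "Canonical customer ID, e.g. 'CUST-1042'.",
            },
            "fields": {
                "type": "array",
                "items": {"type": "string"},
                "description": "Optional projection to keep token usage low.",
            },
        },
        "required": ["customer_id"],
    },
}

A tight description and an optional field projection are small examples of the context engineering named above: they trim token usage while preserving what the agent needs to reason.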
Technical Qualifications
Languages: Expert-level Python (Pandas, PySpark, SQLAlchemy) and strong familiarity with C# for interacting with .NET-based data layers.
AI Data Stack: Hands-on experience with Vector Databases and embedding models.
API Management: Understanding of how data is exposed through Gravitee APIM and secured via MCP-specific authorization flows.
Modern Data Stack: Experience with SQL/NoSQL databases, dbt, and cloud data warehouses (Snowflake, BigQuery, or Databricks).
Protocol Knowledge: Familiarity with the Model Context Protocol (MCP) and how it standardizes data retrieval for AI agents.
Preferred Skills
Experience building Knowledge Graphs to provide relational context to AI agents (a minimal graph sketch follows this list).
Familiarity with semantic caching to reduce LLM costs and improve response times (see the caching sketch below).
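For the knowledge-graph item, a minimal sketch of the idea using networkx; the entities and relations are hypothetical. The point is that an agent can ask for an entity's neighborhood and receive plain-text triples it can drop straight into a prompt as relational context.

import networkx as nx

# Hypothetical enterprise entities; edges carry the relation an agent
# traverses to assemble relational context around an entity.
g = nx.DiGraph()
g.add_edge("Customer:1042", "Order:77", relation="placed")
g.add_edge("Order:77", "Product:SKU-9", relation="contains")
g.add_edge("Product:SKU-9", "Supplier:Acme", relation="sourced_from")

def context_for(entity: str, hops: int = 2) -> list[str]:
    """Render the entity's neighborhood as plain-text triples for an
    LLM prompt."""
    return [
        f"{src} -[{g.edges[src, dst]['relation']}]-> {dst}"
        for src, dst in nx.bfs_edges(g, entity, depth_limit=hops)
    ]

print(context_for("Customer:1042"))
# ['Customer:1042 -[placed]-> Order:77',
#  'Order:77 -[contains]-> Product:SKU-9']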
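And a sketch of semantic caching: embed each incoming query, and if a sufficiently similar query was answered before, return the cached answer instead of paying for a new LLM call. The embed() stub, call_llm() placeholder, and the 0.8 threshold are all assumptions for illustration.

import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy hashed bag-of-words embedding, as in the retrieval sketch
    above; production would reuse a real embedding model."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def call_llm(query: str) -> str:
    """Placeholder for the real model call."""
    return f"(LLM answer for: {query})"

cache: list[tuple[np.ndarray, str]] = []   # (query embedding, answer)
THRESHOLD = 0.8   # hypothetical similarity cutoff; tune per workload

def answer(query: str) -> str:
    """Serve near-duplicate queries from the cache; unit vectors make
    the dot product a cosine similarity."""
    q = embed(query)
    for cached_q, cached_answer in cache:
        if float(q @ cached_q) >= THRESHOLD:
            return cached_answer           # cache hit: no LLM call
    result = call_llm(query)
    cache.append((q, result))
    return result

answer("How do I reset my password?")      # miss: calls the LLM, caches
answer("How do I reset my password")       # near-duplicate: served from cache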