
Senior Data Infrastructure Engineer

Posted 25 days ago

  • London, Greater London
Responsibilities
As a Senior Data Infrastructure Engineer, you will be responsible for deploying, scaling, and maintaining the services at the core of the Workato Data Platform, such as analytical data stores, real-time ingestion services, data lakes, and orchestration tools. You will work closely with Data Engineers and Developers as part of a small, flexible team and will have a direct impact on the modernisation and maturation of the platform, including infrastructure architecture decisions.
The Workato Data Platform is built on industry-leading technologies such as the Snowflake data warehouse, the ClickHouse database, Airflow, Apache Kafka, AWS (RDS, Lambda, Fargate, etc.), Prometheus, VictoriaMetrics, and the Workato SaaS solution itself, which is used to collect data from third-party business support services. We are currently upgrading the platform to meet the requirements of a rapidly growing business, such as:
Increasing confidence in analytical data.
Providing the clearest possible view of the user journey.
Minimizing the gap between when data is emitted and when it is available for analytics.
Supporting a fast-growing volume of data.
We plan to achieve this by adopting additional leading-edge technologies such as DBT, Trino, Kafka Streams, Kafka Connect, and the DataHub unified metadata management platform. You will therefore work with technologies that are relevant across the industry and take on challenging tasks.
Requirements
Qualifications / Experience / Technical Skills
8+ years of verifiable work experience deploying and supporting data-intensive services.
Production experience building deployments of services commonly used in a data platform stack, such as Kafka, Debezium, Airflow, Trino/Presto, Spark, Flink, ClickHouse, S3, Firehose, Kinesis, Snowflake, BigQuery, Redshift.
Experience monitoring, logging, and analyzing service health. Ability to troubleshoot common bottlenecks of data-intensive applications.
Experience managing complex infrastructure (such as Kubernetes clusters, VPC networking, and security policies) using Infrastructure as Code tools (e.g. Terraform or CloudFormation).
Experience creating application deployments of Kubernetes-based services using tools such as Kustomize, Helm, etc.
Experience with AWS cloud computing (EC2, RDS, EKS, EMR, Route53, VPCs, Subnets, Route Tables).
Basic knowledge of one or more high-level programming languages, such as Python, Go, or Java.
Experience with solution cost optimization and capacity planning.
Good understanding of Data Privacy and Security (GDPR, CCPA).
Soft Skills / Personal Characteristics
Good communication and collaboration skills.
Exposure to, or interest in, working with data pipeline technologies.
Readiness to work remotely with teams distributed across the world and across time zones.
Spoken English (at a level sufficient to pass technical interviews and, later, to work with colleagues).