LOCATION: Remote (Georgia, Türkiye, Armenia, Serbia, Kazakhstan, Poland, Montenegro).
LEVEL: Middle and Senior.
SALARY: discussed individually.
This is a US company with a global customer base and a globally distributed remote workforce. It is best known for the world’s most powerful time series database and analytics engine dedicated to financial applications. The company is a major supplier of market data for research and trading, and its regulatory products are used by the world’s largest exchange group, the world’s largest market-maker, the world’s largest options trader, and by regulators, banks, and brokerage firms around the world.
The company is looking for a specialist to join and strengthen its DevOps engineering team, which develops the infrastructure behind the hosted solutions and their software and data delivery lifecycle. Internally, the company has two main directions, Cloud and Solutions; this role is in Cloud.
In the Cloud direction, the company runs a multi-account AWS infrastructure and provides its customers with different kinds of setups. It uses a wide range of AWS resources on top of common services such as EC2, EKS, S3, VPC, and ELB. Most of the AWS infrastructure is covered by IaC, and CI/CD runs on GitLab.
The company stores more than 1.5 petabytes of data in S3 and EFS, some of which is exposed via S3 Storage Gateways. It is also transitioning from virtual machines to Kubernetes, centralizing logging and monitoring, migrating the data loading process to Airflow, and optimizing the infrastructure to increase performance and reduce costs.
The project uses the following technical stack:
- AWS (services in use include EKS, EC2, S3, SGW, ASG, ALB, Lambda, and others).
- Terraform and Ansible for IaC.
- Python and Bash for programming and scripting purposes.
- Docker (Docker Compose, the Docker Airflow plugin) for containerization.
- Kubernetes (mostly EKS, but GKE and other Kubernetes engines are also used) for orchestration, with Helm for managing it.
- Prometheus, Grafana, Loki, AWS CloudWatch, and CloudTrail for monitoring, logging, and statistics collection.
- Airflow for product data processing and MLflow for machine learning.
- Jupyter and JupyterHub for data analysis.
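As a rough illustration of how these pieces fit together, a GitLab CI pipeline that builds a Docker image and deploys it to a Kubernetes cluster via Helm could look like the sketch below. This is a hypothetical minimal example: the job names, image versions, and chart path are illustrative assumptions, not the company's actual configuration.

```yaml
# Hypothetical sketch of a GitLab CI pipeline; names and paths are
# illustrative, not the company's real setup.
stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    # $CI_REGISTRY_IMAGE and $CI_COMMIT_SHORT_SHA are GitLab's
    # predefined pipeline variables.
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image: alpine/helm:3.14.0
  script:
    # Chart path and release name are placeholders.
    - helm upgrade --install my-service ./charts/my-service
      --set image.tag="$CI_COMMIT_SHORT_SHA"
```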
What you will do:
- Improving and extending AWS infrastructure.
- Implementing Kubernetes, creating and upgrading Helm charts for different services.
- Implementing and improving CI/CD pipelines for environments, expanding the capabilities of our release mechanism.
- Improving logging and monitoring systems for the existing infrastructure and services.
- Upgrading the current infrastructure with new ideas to reduce costs and increase performance.
- Creating architecture from scratch for large PaaS customers, minimizing the need for support.
- Helping the developers to simplify and customize the development process.
- Automating Cloud processes using Terraform and Ansible.
- Helping with troubleshooting of AWS-related issues.
What we expect from you:
- Linux experience (Tomcat, NGINX, Apache web server, etc.).
- Experience with general AWS resources (optional for Middle-level specialists).
- CI/CD (any tools).
- Experience working with network tools and protocols.
- Docker, Kubernetes.
- Experience with Bash and Python (proficiency is not required, but a working understanding is beneficial).
- Experience with logging and monitoring tools, as well as knowledge of Terraform, Ansible, Packer, and other IaC tools, will be a plus.
- English: Upper-Intermediate or higher.
The company offers:
Compensation and benefits are competitive and discussed individually with each candidate based on their expertise. The work is fully remote. You will have the opportunity to solve complex problems and, through your decisions, influence the quality of products used by thousands of people around the world.