Adrian Sandulescu, Developer in Bucharest, Romania

Adrian Sandulescu

Verified Expert in Engineering

Software Developer

Location
Bucharest, Romania
Toptal Member Since
July 29, 2019

With seven years of experience building near-petabyte-scale, microservice-based big data applications, Adrian has extensive experience in automating, monitoring, and deploying complex microservice architectures.

Availability

Part-time

Preferred Environment

Ruby, Bash, Git, Emacs, Ubuntu

The most amazing...

...thing I've automated is the processing of billions of daily events using cutting-edge technologies like Apache Druid, Flink, and Kafka.

Work Experience

DevOps Engineer

2012 - PRESENT
Adswizz
  • Automated and deployed Lambda- and Kappa-architecture big data analytics pipelines.
  • Automated and deployed countless microservices.
  • Reduced operating costs by identifying inefficiencies and implementing new technologies.
  • Designed and wrote in-house Puppet modules.
  • Structured CloudFormation template deployments using Troposphere and Sceptre.
  • Established Puppet deployment flow and external module structure.
  • Introduced and implemented Kubernetes.
  • Introduced and implemented the immutable infrastructure paradigm for deployments.
  • Introduced and implemented Spinnaker, achieving industry-leading deployment automation.
  • Introduced and implemented Prometheus.
  • Developed CI/CD pipelines using Jenkins and Concourse.
  • Dockerised applications.
  • Migrated applications to Kubernetes.
Technologies: Apache ZooKeeper, Amazon EKS, Concourse CI, lighttpd, AWS CloudTrail, Kubernetes Operations (kOps), HAProxy, AWS Elastic Beanstalk, Amazon Virtual Private Cloud (VPC), Immutable Infrastructure, Sceptre, AWS CLI, AWS Auto Scaling, AWS ELB, Amazon CloudFront CDN, Amazon DynamoDB, Apache2, Apache Tomcat, Amazon Elastic Container Registry (ECR), Redshift, Apache Flink, AWS Lambda, Amazon Kinesis, AWS CodeCommit, Amazon ElastiCache, AWS CodeDeploy, Amazon Simple Email Service (SES), MySQL, Amazon CloudWatch, Amazon Elastic MapReduce (EMR), Amazon Glacier, Amazon Route 53, AWS Cloud Computing Services, Amazon S3 (AWS S3), Amazon EC2, AWS IAM, AWS CloudFormation, Amazon EBS, Elasticsearch, MongoDB, HBase, Druid.io, Apache Kafka, Hadoop, Flink, Jenkins, Spinnaker, Kubernetes, Docker, Prometheus, ELK (Elastic Stack), Grafana, Graphite, Nagios, Terraform, Troposphere, Puppet, Packer, Bash, Go, Python, Ruby, Amazon Web Services (AWS)

Sysadmin

2010 - 2012
Horia Hulubei National Institute of Physics and Nuclear Engineering
  • Maintained department web and email servers.
  • Deployed and maintained bare metal grid computing infrastructure.
  • Deployed and provided support for various scientific software suites.
  • Deployed and provided support for personal user workstations.
  • Provided hardware support for servers, workstations, and printers.
Technologies: Fedora, CentOS, Ubuntu, Sendmail, Apache, Nagios, Puppet, Bash

Spot Fleet with EBS Reattach for Druid.io

While working on a Kappa-architecture analytics pipeline, I ran into the problem that the database nodes were very expensive.
The only way to significantly reduce costs was to use AWS spare cloud capacity (Spot Instances), made available at a much lower cost,
but with an extremely high risk of losing the virtual machines. Losing VMs could sometimes happen multiple times per day.

Lost VMs could be replaced, but any new VM still had to spend several hours reloading data from cloud storage (S3) before becoming fully operational.
During this time, the database would lose replication and could easily go down entirely, a risk we couldn't take.

To solve this, I used a script that lets newly launched VMs reuse the virtual hard disks left behind by any lost VMs, so they are ready to serve as soon as they come up.
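A minimal sketch of that reattach logic, using boto3 with hypothetical tag names, sizes, and device paths (the production script also handled Druid-specific health checks):

  # Sketch: reattach an orphaned EBS data volume to a freshly launched Spot instance.
  # Assumes data volumes are tagged role=druid-historical and live in the same AZ.
  import boto3
  import requests

  ec2 = boto3.client("ec2", region_name="eu-west-1")

  # Instance metadata tells us who we are and which AZ we are in.
  meta = "http://169.254.169.254/latest/meta-data/"
  instance_id = requests.get(meta + "instance-id", timeout=2).text
  az = requests.get(meta + "placement/availability-zone", timeout=2).text

  # Look for a detached data volume left behind by a terminated Spot node.
  volumes = ec2.describe_volumes(Filters=[
      {"Name": "tag:role", "Values": ["druid-historical"]},
      {"Name": "availability-zone", "Values": [az]},
      {"Name": "status", "Values": ["available"]},
  ])["Volumes"]

  if volumes:
      # Reuse the existing volume so Druid does not have to re-pull segments from S3.
      ec2.attach_volume(VolumeId=volumes[0]["VolumeId"],
                        InstanceId=instance_id, Device="/dev/xvdf")
  else:
      # No orphaned volume: fall back to a fresh one (cold start from S3).
      vol = ec2.create_volume(AvailabilityZone=az, Size=500, VolumeType="st1",
                              TagSpecifications=[{"ResourceType": "volume",
                                                  "Tags": [{"Key": "role",
                                                            "Value": "druid-historical"}]}])
      ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
      ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId=instance_id,
                        Device="/dev/xvdf")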

Introduced Spinnaker

While working on a high-volume Tomcat application deployed across hundreds of nodes, I ran into the problem that the deployment process was too slow and too time-consuming (it was too complex to fully automate).

The deployment process consisted of a manual canary step, in which a single server was updated and monitored for errors while awaiting approval, and a rolling-update step, in which the whole fleet was updated a few VMs at a time.

Because of the large number of VMs that had to be updated, a full deployment could take 30 minutes, and worse, so could a rollback.

To improve this, I implemented Spinnaker, a tool that automates all deployment steps, including the canary and manual stakeholder approval, into a deployment pipeline, and that also supports red/black deployments.

In a red/black deployment, a completely new set of VMs is provisioned and traffic is routed to them. To roll back, traffic simply has to be routed back to the VMs running the previous application version, which means a rollback can now be performed in seconds instead of tens of minutes, minimizing the impact of a failed deployment.
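Spinnaker drives this traffic switch itself; the underlying mechanism can be illustrated with a short boto3 sketch that repoints an ALB listener between two target groups (all ARNs below are placeholders):

  # Sketch: red/black cutover by repointing an ALB listener at the new server group.
  # Spinnaker performs the equivalent step inside its deploy and rollback stages.
  import boto3

  elb = boto3.client("elbv2", region_name="eu-west-1")

  LISTENER_ARN = "arn:aws:elasticloadbalancing:REGION:ACCOUNT:listener/app/example/abc"  # placeholder
  BLACK_TG = "arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/app-v42/def"       # new version
  RED_TG = "arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/app-v41/ghi"         # previous version

  def route_all_traffic_to(target_group_arn):
      """Point 100% of the listener's traffic at one target group (one server group version)."""
      elb.modify_listener(
          ListenerArn=LISTENER_ARN,
          DefaultActions=[{
              "Type": "forward",
              "ForwardConfig": {"TargetGroups": [
                  {"TargetGroupArn": target_group_arn, "Weight": 1},
              ]},
          }],
      )

  # Deploy: send traffic to the freshly provisioned "black" group.
  route_all_traffic_to(BLACK_TG)

  # Rollback: a single call back to the still-running "red" group, in seconds.
  # route_all_traffic_to(RED_TG)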

Implemented Kube2Iam in Kubernetes

While working with applications deployed in Kubernetes that needed access to AWS cloud services, I ran into the problem of using dynamically generated credentials (to ensure proper rotation) while also making sure applications couldn't use each other's credentials.

While AWS makes dynamic credentials easy by assigning a role to each VM, this means all pods running on a VM would share the same access policy.

To solve this issue, I adopted an application called Kube2Iam, which proxies access to the AWS credentials server and allows a separate access policy to be configured for each pod running on a VM.
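With kube2iam running on the nodes, per-pod access is configured through a pod annotation; a minimal sketch using the official Kubernetes Python client (the role ARN, pod name, and image are hypothetical):

  # Sketch: give a single pod its own IAM role via the kube2iam annotation.
  # kube2iam intercepts the pod's calls to the EC2 metadata endpoint and serves
  # credentials for the annotated role instead of the node's role.
  from kubernetes import client, config

  config.load_kube_config()  # or load_incluster_config() when running inside the cluster

  pod = client.V1Pod(
      metadata=client.V1ObjectMeta(
          name="analytics-worker",
          annotations={
              # Only this pod gets the S3-reader policy; other pods on the node do not.
              "iam.amazonaws.com/role": "arn:aws:iam::123456789012:role/analytics-s3-reader",
          },
      ),
      spec=client.V1PodSpec(
          containers=[client.V1Container(name="worker", image="example/analytics-worker:latest")],
      ),
  )

  client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)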

Kafka Virtual Hard Disk Performance Optimization

While running Kafka in AWS, I ran into a problem with the throughput-optimized virtual hard disks used on the Kafka VMs.

The disks' maximum throughput was several times lower than advertised, which meant that switching to SSDs would have cost us both performance and money.

Digging into the documentation, I found that the advertised throughput is only guaranteed for writes of at least 1 MB.

Kafka is optimized specifically for efficient, large sequential writes, so this was most likely a kernel configuration issue.

Searching online for more information, I found that the operating system we were using capped the maximum write size at 256 KB, and that a boot parameter had to be changed for the write size to be increased after boot. A new VM image was created with the required change, which solved the problem.
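The exact parameter isn't recorded here, but on Linux the effective per-request I/O cap is typically exposed as the block device's max_sectors_kb queue setting; a small diagnostic sketch along those lines (the device name and sysfs knob are assumptions for illustration):

  # Sketch: check whether a data disk allows the 1 MB writes that throughput-optimized
  # (st1) volumes need to reach their advertised rate. The sysfs knob and device name
  # are assumptions; the real fix was baked into the VM image via a boot parameter.
  from pathlib import Path

  DEVICE = "xvdb"  # placeholder for the Kafka data disk
  queue = Path(f"/sys/block/{DEVICE}/queue")

  max_io_kb = int((queue / "max_sectors_kb").read_text())     # current per-I/O cap
  hw_max_kb = int((queue / "max_hw_sectors_kb").read_text())  # driver/hardware ceiling

  print(f"{DEVICE}: max I/O size {max_io_kb} KB (hardware limit {hw_max_kb} KB)")

  if max_io_kb < 1024:
      print("Writes are split below 1 MB; st1 throughput will fall short of the advertised rate.")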

Introduced the Immutable Infrastructure Paradigm

While making heavy use of Puppet for automated server configuration, I ran into the problem that new servers took a long time to provision, since every new server had to install and configure all of the required packages.

To tackle this limitation, I pioneered and encouraged the shift to immutable infrastructure, using Packer to create VM images.

Since new servers are now provisioned from pre-configured images, they start several times faster.

As an added benefit, configuration drift is no longer possible.
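Packer automates the bake step itself; the underlying bake-once, launch-many flow can be sketched with boto3 (all IDs and names below are placeholders):

  # Sketch of the immutable-infrastructure flow: bake an image once, then launch
  # every server from it with no per-server provisioning run.
  import boto3

  ec2 = boto3.client("ec2", region_name="eu-west-1")

  # 1. Bake: snapshot a fully configured builder instance into an AMI
  #    (this is the step Packer automates).
  image = ec2.create_image(
      InstanceId="i-0123456789abcdef0",   # builder instance, configured once
      Name="app-server-2019-07-29",
  )
  ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

  # 2. Launch: new servers boot from the pre-baked image, ready in minutes,
  #    with no Puppet run at boot and no opportunity for configuration drift.
  ec2.run_instances(
      ImageId=image["ImageId"],
      InstanceType="m5.large",
      MinCount=1,
      MaxCount=3,
  )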

Languages

Bash, Ruby, Python, Go

Tools

AWS ELB, AWS CLI, Amazon EBS, AWS CloudFormation, AWS IAM, Puppet, Amazon Elastic MapReduce (EMR), Amazon Elastic Container Registry (ECR), Amazon Simple Email Service (SES), AWS CodeDeploy, Amazon ElastiCache, AWS CodeCommit, Amazon CloudWatch, Apache ZooKeeper, Packer, Nagios, Grafana, ELK (Elastic Stack), Amazon Virtual Private Cloud (VPC), Emacs, Git, Flink, Apache, Sendmail, AWS CloudTrail, Apache Tomcat, lighttpd, Terraform, Jenkins, Concourse CI, Amazon EKS, Amazon CloudFront CDN

Platforms

Amazon EC2, AWS Cloud Computing Services, AWS Lambda, AWS Elastic Beanstalk, Apache Flink, Apache Kafka, Docker, Amazon Web Services (AWS), Ubuntu, CentOS, Fedora, Apache2, Kubernetes, Spinnaker

Storage

Amazon S3 (AWS S3), Redshift, Druid.io, HBase, MongoDB, MySQL, Amazon DynamoDB, Elasticsearch

Other

AWS Auto Scaling, Amazon Kinesis, Amazon Glacier, Amazon Route 53, Sceptre, Troposphere, Graphite, Immutable Infrastructure, HAProxy, Prometheus, Kubernetes Operations (kOps)

Frameworks

Hadoop

Education

2008 - 2010

Master's Degree in Investment Management

Academy of Economic Studies - Bucharest, Romania

2005 - 2008

Bachelor's Degree in Business Administration

Academy of Economic Studies - Bucharest, Romania