cv


Artsiom Chuiko

06.08.1994, Belarus

Nowhere

https://github.com/artkostm

artkostm@gmail.com

I am a developer with more than 8 years of well-rounded experience in backend development, object-oriented design, microservices architecture, data processing, and functional programming. My skills span various fields of software engineering, including application, web, and system development. As a supportive and enthusiastic team player, I am dedicated to streamlining processes and efficiently resolving project issues. I always strive to build and improve reusable tools and solutions to achieve repeatable, high-quality results.

Skill Highlights:

Experience

Sep - Dec, 2019 – Big Data Team

Project Roles: Developer, Tech Lead

Developed a data platform for a customer on Azure.

Led the development of a Data Lake on Azure, enhancing data pipelines and data modeling, and spearheaded the transition from a classic Data Lake to a Lakehouse architecture with a focus on data quality and governance.

Tools: Azure, Databricks, Azure Data Factory, Azure Batch, APIM, Event Hub, Kafka, NiFi, Spark, Log Analytics, Power BI, SharePoint, Scala, Java, SQL, Python


Jul - Aug, 2019 – Big Data Team

Project Roles: Developer

New product analysis and predictions. The customer initially wanted high-profile Data Science work but ended up steering the team toward an architecture/consultancy role, and the project was effectively paused for lack of an environment and data.

Tools: AWS, Glue, Elasticsearch, Kibana, Lambda, Java, Python


Jan - Jul, 2019 – Big Data Team

Project Roles: Developer

The project rethought how our customer allocated inventory to its stores. By exploring algorithmic techniques, we developed demand forecasting algorithms and applied business rules to those forecasts, delivering improved store-level allocations that account for sales forecasts and regional effects, along with e-commerce allocation optimization.

Ensured data quality and optimized cloud infrastructure using Airflow, GCP, Hive, and Python.

Tools: Airflow, GCP, Hive, Terraform, Python, PySpark, Pandas


Oct, 2018 - Jan, 2019 – Big Data Team

Project Roles: Developer

Implemented “discounts on the fly” processing of over 1 TB of data within an hour using Spark Structured Streaming. Conducted Spark and Kafka tuning and troubleshooting, ensuring optimal performance and reliability.

Tools: Spark, Kafka, Scala, sbt, S3, HDFS, Jenkins, Qubole


Apr, 2017 - Oct, 2018 – Analyst Tools Team

Project Roles: Developer

Implemented web tools for analysts and a new case management platform.

Tools: Spring, Angular 4/5, Gradle, MyBatis, RabbitMQ, MongoDB, MS SQL, Karma, Querydsl


Jan, 2016 - Apr, 2017 – Fraud Tools Team

Project Roles: Developer

Implemented a REST auth service with an admin UI, plus automated import of disputes and debit memos.

Tools: Spring, MyBatis, jQuery, Gradle, REST Assured, JUnit, Mockito, Selenium, MS SQL


Feb, 2015 - Jan, 2016 – Fraud Team

Project Roles: Developer

Implemented and supported a set of services to automatically detect fraudulent transactions.

Tools: Spring, Gradle, MS SQL, MongoDB, Hadoop, jQuery, Splunk, RabbitMQ, Kibana, Elasticsearch, Jenkins, Stash, GitHub


Sep, 2014 – Feb, 2015 – Data preparation to retrain fraud models

Project Roles: Developer, Tester, Ops

Using the tools below, we developed MapReduce jobs and Hive scripts for data preparation so that data scientists could retrain their fraud models.

Tools: Hadoop, Hive, Gradle, JUnit, JSch, AngularJS


Certifications

Holder of numerous valid certifications in data engineering and operations: View Certifications

Education

Outside of work

I have a keen interest in functional programming. In my free time, I love to play outdoor games, read books and blogs, and explore new technologies and features. I pay special attention to self-education. Below are some of my personal projects: