zagorulkinde's Repositories
zagorulkinde/async-http-client
Asynchronous HTTP and WebSocket client library for Java
zagorulkinde/awesome-behavioral-interviews
This repository contains tips and resources to prepare for behavioral interviews.
zagorulkinde/awesome-cto
A curated and opinionated list of resources for Chief Technology Officers, with the emphasis on startups
zagorulkinde/benchmarks
Latency benchmarks for FIFO data structures
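As a rough illustration of what such a benchmark measures, here is a hedged Python sketch that times single enqueue/dequeue round trips on two standard-library FIFOs; the repository itself presumably uses a dedicated harness, so everything below is illustrative only:

```python
import time
from collections import deque
from queue import Queue

def round_trip_latencies(put, get, iterations=100_000):
    """Time one put+get round trip per iteration, in nanoseconds."""
    samples = []
    for i in range(iterations):
        start = time.perf_counter_ns()
        put(i)
        get()
        samples.append(time.perf_counter_ns() - start)
    samples.sort()
    return samples

# Compare an unsynchronized deque with a lock-based Queue.
for name, put, get in (
    ("deque", *(lambda d: (d.append, d.popleft))(deque())),
    ("Queue", *(lambda q: (q.put, q.get))(Queue())),
):
    lat = round_trip_latencies(put, get)
    print(f"{name}: p50={lat[len(lat) // 2]}ns p99={lat[int(len(lat) * 0.99)]}ns")
```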
zagorulkinde/cv
zagorulkinde/Data_Mining_in_Action_2018_Spring
zagorulkinde/fakesmtp
zagorulkinde/frontend-test-task
Test assignment for a Front-End developer position at KazanExpress; a TODO application taken to the extreme.
zagorulkinde/gigachain
⚡ Building applications with LLMs through composability ⚡
zagorulkinde/hive
Mirror of Apache Hive
zagorulkinde/latency-test-aws
zagorulkinde/LatencyUtils
Utilities for latency measurement and reporting
zagorulkinde/mlops-platforms
Compare MLOps Platforms. Breakdowns of SageMaker, VertexAI, AzureML, Dataiku, Databricks, h2o, kubeflow, mlflow...
zagorulkinde/papers-we-love
Papers from the computer science community to read and discuss.
zagorulkinde/performance
Collection of documents on performance tuning for low-latency Java trading systems, from the hardware level up to the application level
zagorulkinde/piggymetrics
Microservice Architecture with Spring Boot, Spring Cloud and Docker
zagorulkinde/pycld2
zagorulkinde/quarkus-quickfixj
zagorulkinde/resolver
A simple Python script that sends an HTTP GET request to each address resolved by nslookup and measures the response time (see the sketch below)
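A minimal sketch of that approach, assuming a hypothetical target hostname and plain standard-library calls (socket.getaddrinfo standing in for nslookup, urllib for the GET):

```python
import socket
import time
import urllib.request

HOSTNAME = "example.com"  # hypothetical target; substitute the host under test

# Resolve the hostname to its IPv4 addresses, roughly what nslookup returns.
infos = socket.getaddrinfo(HOSTNAME, 80, socket.AF_INET, socket.SOCK_STREAM)
addresses = {info[4][0] for info in infos}

# Send an HTTP GET to each resolved address and time the full round trip.
for addr in addresses:
    req = urllib.request.Request(f"http://{addr}/", headers={"Host": HOSTNAME})
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            resp.read()
        print(f"{addr}: {time.perf_counter() - start:.3f}s")
    except OSError as exc:
        print(f"{addr}: request failed ({exc})")
```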
zagorulkinde/sage
SAGE: Spelling correction, corruption and evaluation for multiple languages
zagorulkinde/sampleproject
A sample project that exists for PyPUG's "Tutorial on Packaging and Distributing Projects"
zagorulkinde/stay-sharp
zagorulkinde/transfer-money
Simple REST HTTP service (Jetty, Jersey, MyBatis, Swagger, RESTEasy)
zagorulkinde/tuning_playbook
A playbook for systematically maximizing the performance of deep learning models.
zagorulkinde/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
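For context, a minimal offline-generation sketch using vLLM's documented Python entry points; the model name is only illustrative and any locally available model works:

```python
from vllm import LLM, SamplingParams

# Load a model into the vLLM engine (illustrative model choice).
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts in a single call.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```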