Responsibilities:
- Participate in the design and development of Big Data analytical applications, from product vision to implementation
- Design and continuously enhance the project code base, continuous integration pipeline, etc.
- Investigate and resolve performance and stability issues in production systems
- Work within a team of software and DevOps engineers
- Collaborate with a globally distributed team and with corporate and customer IT services
Requirements:
- Strong knowledge of Java (collections, multi-threading, JVM memory model, etc.)
- Experience with data mining and stream processing technologies (Apache Spark, Storm, Hadoop MR)
- Experience with version control systems: Git, Subversion
- Understanding of general OOP and functional programming concepts
- Desire and ability to quickly learn new tools and technologies
- Good communication skills in English, including command of industry terminology
What would be a plus:
- Experience scripting in Bash and in any of Ruby, Python, or Perl
- Knowledge of network protocols (TCP/IP, SSH, HTTP, etc.)
- General knowledge of Linux kernel and hardware architecture
- Knowledge of public clouds (Amazon AWS, Google Compute Engine, or others)
- Experience with monitoring systems (Ganglia, Graphite, Zabbix)
- Experience configuring CI servers (Hudson/Jenkins, CruiseControl)
- Programming experience with Scala
What we offer:
- Competitive salary
- Work on bleeding-edge grid and distributed computing projects with a highly motivated and dedicated team of developers and testers
- Flexible schedule
- Medical insurance, a benefits program, and an attractive overall compensation package