AWS DevOps, DataOps, and Cloud Engineering Consultant

The Company

Hitachi Vantara, a wholly owned subsidiary of Hitachi, Ltd., helps data-driven leaders use the value in their data to innovate intelligently and reach outcomes that matter for business and society – what we call a double bottom line. Only Hitachi Vantara combines 100+ years of experience in operational technology (OT) and 60+ years in IT to unlock the power of data from your business, your people and your machines. We help enterprises store, enrich, activate and monetize data for better customer experiences, new revenue streams and lower business costs.

47Lining, a part of Hitachi Vantara, is an AWS Premier Consulting Partner with Big Data and Machine Learning Competency designations. We develop big data solutions and deliver big data managed services built from underlying AWS building blocks like Amazon Redshift, Kinesis, S3, DynamoDB, Machine Learning and Elastic MapReduce. We help customers build, operate and manage breathtaking “Data Machines” for their data-driven businesses. We architect solutions that address traditional data warehousing, Internet-of-Things analytics back-ends, predictive analytics and machine learning to open up new business opportunities. Our experience spans use cases in multiple industries including industrial, manufacturing, oil & gas, energy, life sciences, gaming, retail analytics, financial services and media & entertainment.

The Role

We are seeking experienced and versatile AWS Software Engineering Consultants to develop data and analytics services and solutions in AWS. The ideal contributor will work with a skilled team to develop, deploy, and operate enterprise-grade data platform services and infrastructure on AWS, spanning ingestion of IoT and other enterprise data sources, big data processing, machine learning, and predictive analytics.

47Lining is growing rapidly, and we can only grow as quickly as we find the right talent. We offer highly competitive salaries, bonuses, free training, a conference attendance budget, and flexible hours, and we have a genuinely talented team.

Responsibilities

  • Develop advanced analytics in SQL, R, Spark, or similar tools
  • Work with AWS architectures, development, and deployment
  • Build with enterprise-scale data warehouse technologies (Teradata, Oracle, SQL Data Warehouse, Vertica, Redshift)
  • Manage applications in AWS using core services including EC2, S3, and RDS
  • Work with networking and load-balancing solutions
  • Design and develop REST APIs
  • Work across all layers of an application, from back-end databases through the UI
  • Collaborate with development teams to deliver high-quality results
  • Follow Agile / Scrum development methodology
  • Perform system decomposition, architecture, design, and specification

Qualifications

  • Experience with deployment automation tools such as Puppet, Chef, and Ansible
  • Good working knowledge of Big Data technologies such as Hadoop, Hive, Spark
  • Proficiency with SQL databases and knowledge of standard methodologies
  • Strong experience with middle-tier web services development (REST APIs)
  • Strong background in commercial-grade software development with Java, Python, SQL
  • Front-end application development (application views and controllers)
  • Domain entity and behavior modeling, design, and implementation
  • Third-party application, service, and data integration
  • Data management, mapping, translation, and persistence
  • Large-scale data management and workflow
  • Analytics development and automation
  • Excellent documentation habits

We are an equal opportunity employer. All applicants will be considered for employment without regard to age, race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.