Job Title: Hadoop Developer- Data Warehouse (51315-CRACER-IO45-E-ARI)
Location: West Chester, PA
Duration: 12+ month contract
Emp.Type: W2/1099
Role:
The Hadoop Developer provides code/design analysis and strategy, supports project planning, and develops code and designs for complex projects. The Developer should be able to work independently and deliver assignments with minimal supervision. The Developer also serves as a technical expert for his or her area of code and should have good communication skills for interacting with customers.
Other responsibilities:
Critique and evaluate detailed business, functional, and high-level technical requirements (including recovery, security and audit).
Maintain component design standards.
Design solutions for high-complexity projects.
Ensure that design reviews are scheduled and executed. Provide feedback and recommend solutions. Ensure that design standards and documentation are followed.
Assist with detailed project estimating and milestone planning. Review and validate the accuracy of others' estimates and work with project managers on continuous process improvement for estimating.
Contribute to determining programming approach, tools, and techniques that best meet the business requirements. Promote and define development standards.
Perform coding of complex modules, as needed.
Define and manage the process by which support and technical assistance are performed.
Perform root cause analysis to prevent recurrence of problems and manage the resolution of complex problems.
Ensure delivery of change management activities supporting production deployments to Developers, Quality Control Analysts, and Environment Management personnel. Review application configuration.
Qualification:
Minimum of 2 years of hands-on experience programming on Hadoop
Minimum of 8 years of systems development and implementation experience
Should have thorough knowledge of Hive, HDFS, Accumulo, Pig, Kafka, and Storm
Should have strong Java skills
Must have worked with an in-memory data grid (Apache Ignite)
Experience with Hortonworks implementations preferred.
Experience with the following tasks in the context of Hadoop:
Implementation of ETL applications
Application/implementation of custom analytics
Data migration from existing data stores
Developing capacity plans for new and existing systems
Bachelor's degree in Computer Science, Engineering or Technical Science
Good to Have:
Experience with Agile Scrum Model
Rally Experience
Teradata, Oracle, and SQL Server experience
Experience with ETL tools (Informatica, Pentaho)
To apply, please click here: APPLY NOW