Big Data Architect in Durham, NC at Consultis

Date Posted: 6/12/2018

Job Description

Big Data Architect 
Design, implement, and maintain the organization's application systems and/or IT infrastructure. 
Provide an architectural framework for information system development, maintenance, and enhancement efforts. 
Understand user and process requirements and ensure those requirements can be achieved through high quality deliverables. 
Work closely with developers and engineers to develop road maps for applications, align development plans, and ensure effective integration among information systems and the IT infrastructure. 
Monitor technological advancements to ensure that solutions are continuously improved, supported, and aligned with industry and company standards as well as emerging business requirements. 
Understand the interactions between systems, applications, and services within the environment, and evaluate the impact of changes or additions. 
Analyze systems and perform usability testing to ensure performance and reliability, enhance scalability, and meet security requirements. 
Career-level position within the field. 
Requires experience and proficiency in discipline. 
Conducts complex work important to the organization. 
Works with minimal supervision with wide latitude for independent judgment. 
Typically requires six to nine years of experience or equivalent education. 

Role Specific Requirements: 

• Hands on experience leading large-scale global data warehousing and analytics projects. 
• Demonstrated industry leadership in the fields of databases, data warehousing, or data science. 
• Real-time streaming and time-series technologies with tools such as Spark, Flink, Samza, etc. 
• Hadoop Big Data knowledge – Hive metastore; storage partitioning schemes on S3 and HDFS 
• ETL – understanding and custom coding 
• NoSQL understanding and use-case application – Cassandra, HBase, DynamoDB 
• Understanding and use-case application of columnar data stores 
• Caching and queueing technologies – Kafka/Kinesis, RabbitMQ/SQS, Redis/Memcached, etc. 
• RDBMS skills – SQL, optimization techniques, etc. 
• Scripting/Programming skills – Python, Java, Scala, Go 
• Excellent understanding of operating systems including troubleshooting 
• Data warehousing knowledge, data science, machine learning 
• Experience in designing big data lake/warehouse for data integration from enterprise wide applications/systems 
• Experience with various ingestion patterns for large data sets 
• Experience with Open Source and NoSQL technologies (e.g. MongoDB, Redis) at an Enterprise level 
• Lambda architecture for ingestion including real-time streaming technologies (Kafka, Storm, Spark Streaming) 
• Knowledge and understanding of ETL design and data processing mechanisms 
• Knowledge about data replication, data masking, and performance factors, etc. 
• Ability to formulate solutions for new situations involving the integration of various disparate systems into the warehouse
