Salary Range: $120,000 - $180,000 a year
Must have experience with Databricks, including Delta Lake, Unity Catalog, Databricks Workflows orchestration, security management, platform governance, and data security.
Must have knowledge of new Databricks features, their implications, and their possible use cases.
Must have applied architectural principles to design the solution best suited to each problem.
Must be well versed in the Databricks Lakehouse concept and its implementation in enterprise environments.
Must have a strong understanding of data warehousing and of the governance and security standards around Databricks.
Must have knowledge of cluster optimization and of Databricks integration with various cloud services.
Must have a good understanding of how to create complex data pipelines.
Must be strong in SQL and PySpark SQL.
Must have worked on designing both batch and streaming data pipelines.
· Ability to design and implement scalable and maintainable data architectures
· Deep understanding of data warehousing, ETL processes, and data modeling principles
· Engage in code reviews and provide feedback to improve code quality and efficiency
· Experience with big data technologies (e.g., Hadoop, Spark)
· Experience with cloud-based data platforms (e.g., AWS, Azure, GCP)
· Expertise in designing and implementing data pipelines, including ETL and ELT methodologies
· Knowledge of data governance frameworks, metadata management, and data security
· Proficiency in data modeling tools and techniques (e.g., ER diagrams, data dictionaries)
· Strong communication and collaboration skills, with experience interacting with business stakeholders
· Strong knowledge of relational and non-relational database systems (e.g., SQL, NoSQL)