Responsibilities:
- Architect, design, and develop large-scale data pipelines and lakehouse solutions using GCP-native services (BigQuery, Dataflow, Dataproc, Pub/Sub, Composer, Cloud Storage).
- Modernize legacy systems and lead end-to-end data migration to GCP, ensuring compliance with governance and security standards.
- Define enterprise-wide data architecture frameworks, modeling standards, and best practices for structured and unstructured data.
- Implement real-time and batch data processing pipelines integrated with analytical and AI workloads (a streaming-pipeline sketch follows this list).
- Lead DataOps initiatives including CI/CD, automation, orchestration, and monitoring frameworks (an orchestration sketch also follows this list).
- Partner with business stakeholders to design data solutions aligned with organizational strategy and cost efficiency.
- Mentor development teams and promote cross-functional learning through workshops and technical documentation.
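
To ground the streaming responsibility above, here is a minimal sketch (not a reference implementation) of the kind of real-time pipeline this role builds: an Apache Beam job in Python, runnable on Dataflow, that reads JSON events from Pub/Sub and appends them to BigQuery. The project, topic, table, and schema names are hypothetical placeholders, not part of this posting.

```python
import json

import apache_beam as beam  # pip install 'apache-beam[gcp]'
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

# Hypothetical resource names, for illustration only.
TOPIC = "projects/example-project/topics/orders"
TABLE = "example-project:analytics.orders"


def run():
    options = PipelineOptions()
    options.view_as(StandardOptions).streaming = True  # Pub/Sub source requires streaming mode
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic=TOPIC)  # yields raw bytes
            | "DecodeJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                TABLE,
                schema="order_id:STRING,amount:FLOAT,event_ts:TIMESTAMP",  # assumed schema
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )


if __name__ == "__main__":
    run()
```

The same code runs locally on the DirectRunner for testing, or on Dataflow by passing the standard --runner=DataflowRunner, --project, and --region pipeline options.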
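For the DataOps and orchestration bullet, the sketch below shows a minimal Cloud Composer (Airflow 2) DAG with a two-step dependency and retry policy; the DAG id, schedule, and task commands are illustrative assumptions only.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Retry each task twice, five minutes apart, before failing the run.
default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="daily_lakehouse_load",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # Airflow 2.4+ 'schedule' keyword
    catchup=False,
    default_args=default_args,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extracting")
    load = BashOperator(task_id="load", bash_command="echo loading")
    extract >> load  # load runs only after extract succeeds
```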
Requirements:
- 10–12 years of total experience in data engineering or data architecture, with at least 4–5 years in GCP-focused projects.
- Proven experience designing and developing ETL/ELT pipelines, streaming data solutions, and data lake architectures.
- Strong programming skills in SQL, Python, and Terraform (for Infrastructure as Code).
- Demonstrated ability to integrate structured (e.g., relational, columnar) and unstructured (e.g., JSON, logs, media) data sources into unified data architectures.
- Leadership experience managing data teams, cloud transformations, and cross-departmental migration projects.
- Experience implementing automated monitoring, data lineage, and cost-governance frameworks on GCP.
- Strong understanding of data modeling, partitioning, and schema design for analytical workloads (a table-design sketch follows this list).
- Excellent problem-solving and analytical-thinking skills.
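
As one concrete illustration of the partitioning and schema-design expectation, here is a small sketch using the google-cloud-bigquery Python client to create a day-partitioned, clustered table. The project, dataset, table, and field names are assumptions for illustration, not part of this posting.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

# Assumed schema for an analytical fact table.
schema = [
    bigquery.SchemaField("event_ts", "TIMESTAMP", mode="REQUIRED"),
    bigquery.SchemaField("customer_id", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
]

table = bigquery.Table("example-project.analytics.orders", schema=schema)

# Partition by day on the event timestamp so queries scan only the days they filter on.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_ts",
)

# Cluster within each partition on a common filter column to prune storage blocks.
table.clustering_fields = ["customer_id"]

table = client.create_table(table, exists_ok=True)
```

Partitioning bounds query cost by limiting the scanned date range, while clustering sorts data within each partition so filters on the clustered column read less data, the two levers this bullet refers to.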