Data Engineer

Tata Consultancy Services • Newark, NJ 07175 • Posted 4 days ago via LinkedIn

In-person • Full-time • Senior Level

Job Highlights

The Data Engineer at Tata Consultancy Services will be responsible for designing and implementing data storage systems using Azure services, developing data integration processes, and applying big data technologies to support data analytics and machine learning applications. The role also involves ensuring the scalability, performance, and cost-effectiveness of data storage and integration, as well as building and maintaining applications in an Agile environment.

Responsibilities

  • Design and implement data storage systems using Azure services like Azure SQL Database, Azure Data Lake Storage, and Azure Synapse.
  • Ensure scalability, performance, and cost-effectiveness.
  • Develop and implement data integration processes using Azure Data Factory.
  • Extract data from various sources, transform it, and load it into data warehouses or data lakes.
  • Utilize big data technologies such as Apache Spark.
  • Create data processing workflows and pipelines to support data analytics and machine learning applications (see the illustrative sketch after this list).
  • Build and maintain new and existing applications in preparation for a large-scale architectural migration in an Agile environment.
  • Monitor and optimize data pipelines and database performance to ensure data processing efficiency.
  • Build interfaces to support new and evolving applications and accommodate new data sources and data types.
  • Document data engineering processes, data models, and pipelines to ensure transparency and maintainability.
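
For context on the integration and pipeline items above: a typical step of this kind reads raw data from the lake, applies transformations, and writes a curated output. Below is a minimal PySpark sketch of such a step; it is not code from the job post, and the storage account, container names, and column names are hypothetical placeholders.

    # Minimal illustrative ETL sketch (hypothetical names throughout).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example-daily-orders-etl").getOrCreate()

    # Extract: read raw CSV files from a (hypothetical) ADLS Gen2 container.
    raw = spark.read.option("header", True).csv(
        "abfss://raw@examplestorage.dfs.core.windows.net/orders/"
    )

    # Transform: basic cleansing plus a daily aggregate.
    daily = (
        raw.withColumn("order_date", F.to_date("order_timestamp"))
           .dropna(subset=["order_id", "order_date"])
           .groupBy("order_date")
           .agg(
               F.sum("amount").alias("total_amount"),
               F.count("order_id").alias("order_count"),
           )
    )

    # Load: write the curated result as Parquet to a curated zone.
    daily.write.mode("overwrite").parquet(
        "abfss://curated@examplestorage.dfs.core.windows.net/daily_orders/"
    )

In practice such a job would usually be orchestrated and parameterized by an Azure Data Factory or Synapse pipeline, but the read-transform-write shape stays the same.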

Qualifications

Required

  • Bachelor's degree in Computer Science or a related field.
  • 5+ years of experience building data and analytics platforms focused on Azure data and analytics solutions.
  • Expertise in Azure services such as Azure SQL Database, Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake.
  • Proficiency in software development and scripting languages (e.g., Python, PySpark).
  • Knowledge of ETL tools (e.g., SSIS, Azure Data Factory, Power BI Dataflow).
  • Database management skills (MSSQL, Azure SQL Database, Azure Data Lake Storage, and Azure Synapse).
  • Knowledge of data modeling and data warehousing concepts.
  • Experience with cloud-based data migration.
  • Strong analytical and problem-solving skills.
  • Experience working in an Agile environment with a Scrum Master/Product Owner.

Full Job Description

Roles & Responsibilities

Data Management and Storage:

Design and implement data storage systems using Azure services like Azure SQL Database, Azure Data Lake Storage, and Azure Synapse.

Ensure scalability, performance, and cost-effectiveness.

Data Integration and ETL (Extract, Transform, Load):

Develop and implement data integration processes using Azure Data Factory.

Extract data from various sources, transform it, and load it into data warehouses or data lakes.

Big Data and Analytics:

Utilize big data technologies such as Apache Spark.

Create data processing workflows and pipelines to support data analytics and machine learning applications.

Build and maintain new and existing applications in preparation for a large-scale architectural migration in an Agile environment.

Monitor and optimize data pipelines and database performance to ensure data processing efficiency.

Build interfaces to support new and evolving applications and accommodate new data sources and data types.

Document data engineering processes, data models, and pipelines to ensure transparency and maintainability.


Qualifications

Bachelor's degree in Computer Science or a related field.

5+ years of experience building data and analytics platforms focused on Azure data and analytics solutions.

Expertise in Azure services such as Azure SQL Database, Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake.

Proficiency in:

Software development and scripting languages (e.g., Python, PySpark).

ETL tools (e.g., SSIS, Azure Data Factory, Power BI Dataflow).

Database management (MSSQL, Azure SQL Database, Azure Data Lake Storage, and Azure Synapse).

Knowledge of data modeling and data warehousing concepts.

Excellent problem-solving and troubleshooting abilities.

Attention to detail and commitment to data accuracy.

Experience with cloud-based data migration.

Strong analytical and problem-solving skills, with the ability to conduct root cause analysis on system, process, or production problems and to provide viable solutions.

Experience working in an Agile environment with a Scrum Master/Product Owner and the ability to deliver.

3+ years of programming experience in Python/PySpark.

Knowledge of Jira, Confluence, the SAFe development methodology, and DevOps.