Job title: Senior Software Engineer 8846
Job role: Other Role
Country: United States
Company name: UiPath
Employment Type: Full Time
Seniority level: Senior (more than 5 years)
Life at UiPath
The people at UiPath believe in the transformative power of automation to change how the world works. We’re committed to creating category-leading enterprise software that unleashes that power.
To make that happen, we need people who are curious, self-propelled, generous, and genuine. People who love being part of a fast-moving, fast-thinking growth company. And people who care—about each other, about UiPath, and about our larger purpose.
Could that be you?
The Data Foundation team is building UiPath’s big data pipeline ecosystem. We provide scalable and reliable ETL systems for UiPath’s offerings and are on an ambitious path to empower all teams across UiPath to surface their data via a common data platform. We own core storage and data pipelines, data APIs, analytics, and real-time monitoring for a growing list of products and scenarios.
Must be in Bellevue, WA area
What you’ll do at UiPath
• Design, develop, and maintain data pipelines that integrate data from various sources, ensuring efficient and reliable data ingestion and processing.
• Collaborate with product teams, data scientists, analysts, and other stakeholders to gather requirements and understand data needs.
• Optimize data pipelines for quality, performance, and scalability, utilizing best practices in data management and ETL (extract, transform, load) processes.
• Monitor and troubleshoot data pipeline issues, implementing fixes and improvements as needed to ensure timely and reliable data delivery.
• Stay current with industry trends and technologies, incorporating new techniques and tools to continuously improve data pipeline architecture and performance.
What you’ll bring to the team
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
• Proven track record (8 years’ experience) of architecting and engineering world-class, large-scale commercial applications and services.
• 3+ years of experience in data engineering, ETL, or a similar role, with a strong focus on data pipeline development and optimization.
• Proficiency in programming languages such as Python, Java, or Scala.
• Experience with big data technologies and frameworks, such as Snowflake, Spark, Flink, or similar.
• Familiarity with data pipeline orchestration tools, such as Apache Airflow, Luigi, Azure Data Factory, or similar tools.
• Strong knowledge of SQL and experience working with various databases and data tools (e.g., Snowflake, Spark SQL, MSSQL, dbt).
• Experience with cloud-based data storage and processing platforms, such as AWS, GCP, or Azure.
• Strong problem-solving skills and the ability to work independently as well as collaboratively in a team environment.
• Excellent communication skills, with the ability to articulate complex technical concepts to non-technical audiences.
Maybe you don’t tick all the boxes above— but still think you’d be great for the job? Go ahead, apply anyway. Please. Because we know that experience comes in all shapes and sizes—and passion can’t be learned.
We value a range of diverse backgrounds, experiences, and ideas. We pride ourselves on our diverse and inclusive workplace that provides equal opportunities to all persons regardless of age, race, color, religion, sex, sexual orientation, gender identity and expression, national origin, disability, neurodiversity, military and/or veteran status, or any other protected classes.