Software Engineer 2
Boomi is the platform for intelligent connectivity and automation. Connect everyone to everything, anywhere.
Job Description
Join us as a Software Engineer on our Software Engineering team in Pune to do the best work of your career and make a profound social impact.
About us
Boomi’s Cloud API Management, the pioneer in the API management space, was born out of key business needs in the market: enabling faster app development and partner growth, providing better, more secure API performance, and delivering data-powered insights. We help customers launch their API programs quickly to get new products and services to market faster, and connect with business partners more easily to foster innovation and expand routes to market. We also deliver successful API performance with integrated security, and our analytics and performance dashboard help users understand what works, what doesn’t, and how to improve governance for the API program.
How You’ll Make An Impact
As a Software Engineer, you will be responsible for developing sophisticated Big Data systems and software based on the customer’s business goals, needs, and general business environment. You will work within the engineering team on developing cutting-edge new product features and enhancements across various areas of Boomi Cloud API Management services.
What You’ll Do
- Be a member of an Agile team, collaboratively realizing features through the software development lifecycle processes and following test-driven development methodologies.
- Participate in the design, development, unit testing, and deployment of Boomi’s Big Data Services, including enhancements and the resolution of any reported issues.
- Investigate and resolve complex customer issues.
- Data Pipeline Development: Design, build, and maintain scalable data pipelines for processing large volumes of structured and unstructured data.
- Data Processing: Implement and maintain batch and real-time data processing systems using technologies like Apache Spark, Hadoop, or Kafka.
- Testing & Deployment: Implement automated testing and continuous integration/deployment (CI/CD) for big data systems to ensure robustness.
- Data Governance & Security: Ensure compliance with data governance and security policies while working with sensitive data.
- Work within your team with minimal guidance from technical leadership.
The Experience You Will Bring
- Experience designing, developing, and maintaining software products that process Big Data streaming pipelines, with a solid understanding of MapReduce architecture and software development lifecycle processes.
- Working knowledge of programming languages such as Java or Python for data processing.
- Experience with data modeling, an understanding of schema design for big data applications, and familiarity with messaging frameworks that can handle big data loads, such as Kafka or Kinesis.
- Experience with real-time data processing and stream-processing frameworks such as Spark or Flink.
- Strong analytical skills and ability to solve complex problems related to big data processing and management.
- Experience with agile software development processes, such as sprint-based models, and collaboration tools such as JIRA and Confluence.
- Strong understanding of Computer Science fundamentals; algorithms and data structures are required.
Bonus Points If You Have
- 2-4 years of experience in software development.
- Familiarity with DevOps tools like Harness, Jenkins, and other CI/CD tools.
- Familiarity with SQL and NoSQL databases, including Cassandra, MongoDB, HBase, or similar technologies.
- Familiarity with cloud platforms such as AWS, GCP, or Azure for deploying and scaling big data systems.
- Familiarity with data ingestion tools like Fluent Bit or Stash, observability applications like Kibana and OpenSearch, and monitoring applications like Prometheus or New Relic.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.