High-growth startup opportunity
PartnerTap is actively changing the way sales and channel teams work across organizations. We are automating and streamlining the painful steps channel professionals go through to effectively work with their partners. Our ever-growing platform is used by several of the largest enterprises in the market, including SAP Concur, ADP, HPE, and many more.
Being an innovation-focused company, we care about using the right tools for the best results and are open to implementing creative solutions and new technologies. We care about our clients’ needs and ensure our product is fit to support them.
Job Type: Full Time
Post Date: February 8, 2022
Your expertise in building and optimizing Big Data pipelines is not purely technical. You contribute to team- and company-wide decision-making and can identify challenges before they occur. You are passionate about introducing new algorithms and incorporating machine learning models into your work. You love building sustainable solutions from the ground up and want to make an impact on an entire company's ability to deliver a smart and secure platform.
You love solving big problems with big ideas. You are excited by the prospect of optimizing and re-designing our company’s data architecture to support the next generation of products and data initiatives. You are independent, analytical, a quick learner, and enjoy working with people as smart as you are.
Responsibilities:
- Work at the core of the PartnerTap product
- Improve the overall quality of data that drives better partnerships
- Analyze, slice, and dice huge volumes of data to explore the opportunities for better matching
- Build and maintain an optimal data pipeline architecture
- Identify, design, and implement improvements to data processing, including re-designing infrastructure for greater scalability
- Build the infrastructure required for optimal extraction, transformation, and loading of large volumes of real-time data
Desired Skills and Experience:
- Experience building and optimizing Big Data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Experience building processes supporting data transformation and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable Big Data stores.
- Experience with relational SQL and NoSQL databases, such as Postgres and Cassandra.
- Proven track record using big data tools such as Hadoop, Spark, Kafka, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift, etc.
- Experience with stream-processing systems: Spark Streaming, etc.
- Experience with object-oriented and functional programming languages: Python, Java, C++, Scala, etc.