We greatly value innovation, creativity, and collaboration. If the word “mobile” excites you and you are ready to work in a dynamic start-up environment, we would love to hear from you.
At Blesh, you will work side by side with a talented team from diverse backgrounds on mobile technologies and communications.
All open positions are posted on this page; if you are interested, please click the button to apply for the role.
We are looking for a Data Engineer to join our IT team to help us expand and optimize our data and data pipeline architecture, support the data needs of our stakeholders, and work collaboratively with software engineers and business analysts.
The ideal candidate is an experienced engineer who is passionate about working with big data, enjoys optimizing data systems, and likes building them from the ground up. The role involves designing, building, and managing data systems and databases. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure that an optimal data delivery architecture is maintained consistently across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing, or even re-designing, our company’s data architecture to support our next generation of products and data initiatives.
Responsibilities:
- Manage all data created within client applications, including the structure of the data held and the views created from it.
- Recommend the right technologies and the most cost-effective ways to use them.
- Propose design solutions and recommend best practices for large-scale data analysis.
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Product and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Interpret business needs from requests, and rapidly implement effective technical solutions.
- Automate and improve creation/maintenance of reports and dashboards.
- Use data to resolve business issues effectively, delivering the best possible reporting through dashboards and other data-driven means of communication.
- Proactively recommend operational efficiency and performance enhancements based upon data analysis.
- Demonstrate outstanding communication skills, translating reporting requests into solutions that accurately meet users’ information and deadline needs.
- Maintain source code repository of scripts (SQL, Python, etc.) and other data products (dashboards, reports, etc.).
- Iterate on our data warehousing strategy as the volume of our data increases, optimizing for data availability and efficient use of resources.
Qualifications:
- B.S. or M.S. degree in Computer Engineering
- 3+ years of experience in Data Analysis or Data Engineering
- Advanced working knowledge of SQL and experience with relational databases and query authoring
- Advanced knowledge of and hands-on experience with database definition and schema design
- Experience with the Linux command line and Bash scripting
- Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
The following assets are a plus for the ideal candidate:
- Experience with Git/GitHub
- Experience with AWS services such as RDS, Redshift, S3, Kinesis, and EMR
- Experience with machine learning (ML)
- Proficiency in Node.js
- Experience with data pipeline and workflow management tools
- Experience with or knowledge of big data tools (Hadoop, Spark, Kafka, etc.)
- Knowledge of and experience with NoSQL databases
Skills & Experience:
Ability to scope a project from a technical brief and work with the DevOps and QA teams to produce a detailed project plan, including:
- Data Flow Diagrams for process flow
- Database Schemas & Normalization
- Recommended software / plugins / architecture
- Scalable environment architecture suggestions
- Hosting, storage, load balancing and caching suggestions
- Performance considerations
- Security considerations
- Assumptions & Exclusions
- A complete and accurate estimate for the project