Rebounding From A Failed Venture & The Contexti Origin Story – An Interview With Sidney Minassian on Founder to Founder Podcast

Phil Hayes-St Clair, host of the Founder to Founder podcast, recently interviewed Sidney Minassian, Founder & CEO of Contexti – Big Data Analytics, about his entrepreneurial rollercoaster, which started in 2000.

Prior to launching his current venture, Contexti, Sidney built a project management and workflow software company, Think Software, serving the financial markets, construction and professional services industries. He then moved to Silicon Valley, USA, to launch his next venture, Liaise, a platform that used natural language processing of unstructured data in emails to improve personal and team productivity.

In this podcast, Sidney talks about:

  • Resilience
  • Learning from failure
  • The value of owning and accepting outcomes
  • The recovery process between ventures and how to reflect, recoup and rebuild
  • Why building a venture is all about people
  • Looking for scalable, repeatable business models
  • The Contexti origin story – positioning as a niche player in the emerging Big Data industry.


If you liked this episode of Founder to Founder, follow Phil Hayes-St Clair on SoundCloud or download it on iTunes.

Contexti’s Big Data as-a-Service In The Cloud Just Got Better With Cloudera Altus!

We’re excited by our partner Cloudera’s recent announcement of the availability of Altus, which takes the deployment of data platforms and data pipelines in the cloud to the next level.

“Leveraging AWS cloud and Cloudera Enterprise, Contexti has a track record of providing big data-as-a-service / big data platform services for Australian customers, including for Seven West Media’s coverage of the Rio Olympic Games,” said Sidney Minassian, Founder & CEO of Contexti. “With the availability of Cloudera Altus we’re looking forward to enhancing our service offering for customers who are leveraging their data for value creation.”

Seven West Media taps Cloudera and Contexti for Big Data Solution for Rio Olympics

Cloudera Altus features include:

  • Managed service for elastic data pipelines
  • Workload orientation
  • Backward compatibility and platform portability
  • Built-in workload management and analytics
  • Faster cluster provisioning times
  • Integrated security with cloud service provider solutions

To learn more about Altus, read Cloudera’s blog: Simplifying Big Data in the Cloud

Data Science Workbench by Cloudera Ready for Prime Time

In a fast-evolving ecosystem of tools and libraries, data scientists are finding it difficult to use their existing open source languages (e.g. Python, R) and libraries with Hadoop, and are striving to bridge the gap between the language of the data scientist and the language of distributed systems.
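To make that gap concrete, here is a plain-Python sketch of the map → shuffle → reduce pattern that engines like Spark and Hadoop MapReduce run at cluster scale; the function names and sample data are illustrative, not any particular framework’s API:

```python
from collections import defaultdict

# Plain-Python sketch of map -> shuffle -> reduce: the same word-count
# logic a data scientist writes locally, structured the way a distributed
# engine would execute it across a cluster.

def map_phase(lines):
    # map: emit a (key, 1) pair for every word
    return [(word, 1) for line in lines for word in line.split()]

def shuffle_phase(pairs):
    # shuffle: group emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: aggregate each key's values into a single result
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big value", "fast data"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts)  # {'big': 2, 'data': 2, 'value': 1, 'fast': 1}
```

In a distributed engine each phase runs in parallel across machines; tools like the Data Science Workbench let data scientists express this in familiar Python or R and leave the distribution to Spark.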

Contexti partner Cloudera have just announced the general availability of Data Science Workbench. This powerful, self-service tool allows people to accelerate data science from exploration to production using R, Python, Spark and more.

Data scientists now have the freedom to share, collaborate and manage their data in a way that best suits them, resulting in an easier and faster path to production that is secure for the enterprise.

“We are entering the golden age of machine learning and it’s all about the data.”

– Charles Zedlewski, Senior Vice President of Products at Cloudera.


To find out more about the Data Science Workbench, visit our partner Cloudera’s site.

How Kudu Enables Fast Analytics on Fast Data

From our partner Cloudera’s website:

Kudu is a columnar storage manager developed for the Hadoop platform that runs on commodity hardware, is horizontally scalable, and supports highly available operation.

Kudu shares the common technical properties of Hadoop ecosystem applications and targets support for families of applications that are difficult to implement on current generation Hadoop storage technologies.


Kudu’s design sets it apart. Some of Kudu’s benefits include:

  • Fast processing of OLAP workloads.
  • Integration with MapReduce, Spark and other Hadoop ecosystem components.
  • Tight integration with Impala, making it a good, mutable alternative to using HDFS with Parquet.
  • Strong but flexible consistency model, allowing you to choose consistency requirements on a per-request basis, including the option for strict serialized consistency.
  • Strong performance for running sequential and random workloads simultaneously.
  • Easy to administer and manage with Cloudera Manager.
  • High availability. Tablet Servers and Master use the Raft consensus algorithm, which ensures availability even if f replicas fail, given 2f+1 available replicas. Reads can be serviced by read-only follower tablets, even in the event of a leader tablet failure.
  • Structured data model.
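The high-availability point rests on simple quorum arithmetic: a Raft group of 2f+1 replicas keeps a majority, and therefore stays available, while up to f replicas are down. A quick illustrative sketch (the function is ours, not part of Kudu):

```python
# Raft quorum arithmetic behind Kudu's availability guarantee:
# a tablet with 2f+1 replicas stays available while up to f replicas fail,
# because the remaining f+1 replicas still form a majority.

def tolerated_failures(replicas: int) -> int:
    """Maximum replica failures a Raft group survives while keeping a majority."""
    return (replicas - 1) // 2

for n in (1, 3, 5, 7):
    print(f"{n} replicas -> tolerates {tolerated_failures(n)} failure(s)")
```

This is why Kudu tablets are typically deployed with 3 or 5 replicas: even numbers add storage cost without improving fault tolerance.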

To read more about Kudu and find out how to install it, go to our partner Cloudera’s website.


Decoding Big Data with Contexti on EchoJunction Podcast

On this podcast, Adam Fraser from EchoJunction interviews Contexti Founder & CEO, Sidney Minassian. They discuss:

  • What is Big Data?
  • What size of data is considered big?
  • How should organisations approach their big data objectives?
  • Who should be involved in big data projects?
  • Details on the Seven West Media Big Data Analytics solution for the Rio Olympic games supported by Contexti
  • Achieving Big Data ROI vs experimental R&D
  • Does the CMO have a seat at the table?
  • Implications of algorithms on individuals



Subscribe for updates on the latest big data analytics training courses, industry events and career insights.

[activecampaign form=13]


Interested in Social and Digital topics? Follow Adam Fraser on Twitter and subscribe to the EchoJunction podcast.

Beyond The Clouds – Towards Frictionless Value Creation

With AWS Summit in Sydney just around the corner, I’m suggesting we should be aiming beyond the clouds and towards frictionless value creation.

Whether you’re an enterprise customer consuming cloud services, an established AWS partner like Contexti, specialising in Big Data Analytics, or a startup thinking about leveraging Cloud, IoT, Artificial Intelligence or whatever tech comes next, we (ought to) understand that it’s not about the technology but the business enablement these technologies provide.

But I’m wondering what is possible if we change our aim ever so slightly? Can we improve our collective mindset and create better outcomes for our customers?

I’m suggesting we change the narrative from talking about ‘journey to the cloud’ (or any other tech) to ‘journey towards frictionless value creation’.

Consider for a moment all the experiences we have that are super convenient. Just thinking about my own past week: using Uber to get a ride, using electronic tickets on my mobile to scan my kids through at the movies, using face recognition at the airport to speed through customs, using contactless cards to make payments (PayWave / Tap and Go), using iTunes to buy and download music, and now exploring Alexa, which lets me get things done (order services, get answers from the internet) with voice commands.

In each of these examples the product and service vendors created frictionless value, which is why I will keep coming back to these services and will happily pay for the convenience. While the examples used here are consumer products and services, to make the point easily relatable, I believe our experience in the enterprise (business to business) should be no different. Whether we like it or not, consumer experiences have set the bar for our expectations and have become what we need to achieve in the enterprise, that is, frictionless value creation.

So how do we get to ‘frictionless value creation’?

It’s a constant work-in-progress. There is always additional friction we can be taking away from our customers’ experiences, and when we think we’re done, we’ll have another set of technology platform shifts, new service integrations or a new set of expectations to meet.

Here are some considerations for getting started towards frictionless value creation:


#1 Reset Goals, Narrative & Mindset

As per my arguments above, if we change our message from ‘journey to the cloud’ to ‘journey towards frictionless value creation’ we are in effect setting a better goal, we’re changing our narrative and our mindset. This level of awareness alone will yield better conversations both internally and with our customers.

#2 Seek Problems

This won’t be the first or last article to talk about ‘getting to know your customers’. However, to create frictionless experiences, near enough won’t be good enough. We need to get deep with customers, understanding their intent, desires, environments, experiences and challenges to bubble up problems, blocks and frictions.

#3 Go Niche

To be successful, we need to start with a narrow focus, a tight niche, and we must sweat the small stuff. That’s what sets great products and services apart from all others.

#4 Look for Ecosystems

In an ever-connected world, every product and service is part of an ecosystem. We should be looking out for the one or many ecosystems we play in or ought to play in. Then we need to figure out how we can benefit from the ecosystem and how we can add value to it.

#5 Understand Emotion

To truly understand whether we’re succeeding towards frictionless value, we must also understand the costs and benefits of the changes we are imposing on our customers. With all our so-called ‘efficiencies’ we must not overlook the human factors that are often personal and emotional. We must keep in mind that innovation is technology change that results in value. People won’t accept the change if we’ve neglected their emotions.


By: Sidney Minassian – Founder & CEO, Contexti – Big Data Analytics



6 Missing Drivers of Failed Big Data Projects in Australia

Given Contexti’s 50+ Big Data Solutions engagements, and having provided Big Data Training to over 1,000 professionals from over 200 enterprises across Australia, we’ve seen, heard and been involved with enough projects to draw some insights on what drives big data project success and failure.

At the same time, talking with our international business partners and counterparts, we find our observations, while made in Australia, are not unique to Australian enterprises; businesses across the globe will benefit from understanding the importance of the following drivers.

To help illustrate each driver below, we’ve provided an insight into how Contexti client Seven West Media positively addressed each driver, resulting in the delivery of one of the most successful Big Data Analytics projects in Australia.

#1 – Commercial Strategy

What’s the purpose of your big data project? Are you responding to business disruption? Are you leading with innovation? Which revenue lines have been impacted and which revenue lines do you intend to impact? Having a clear commercial strategy with the right narrative will help to align team members, limit initial scope and set the bar for getting a return on investment. No clear commercial strategy, a poorly defined strategy or the common ‘proof of concept’ approach of ‘let’s throw all our data in a data lake and see what we find’ will deliver a tyre-kicking exercise, resulting in wasted time, money and missed opportunities.

Contexti Client Example: Seven West Media recognised ‘television was no longer just in the living room’ and they were now in the business of ‘quality content on any device, anywhere, any time’. This headline narrative guided the commercial strategy for Seven West Media’s big data project which was to find innovative and effective ways to engage with audiences on any platform. Engaged audiences directly correlate to increased revenue.

#2 – Defined Actions

Aligned to your commercial strategy, you need clarity on how you will take action on your new-found insights. Worse than not having the capability to find new insights from your data is having the insights but being unable to act on them. This requires you to step away from the data and think about the systems, people, processes, customers and partners that will be impacted when you act on your new insights.

Contexti Client Example: During the 2016 Rio Olympic Games, Seven West Media leveraged their big data platform to find new audience insights and to personalise the viewer experience. Leveraging those insights, they executed a marketing program that delivered 2.7 million emails across 108 targeted campaigns. By defining and acting on their insights, they achieved a 29% lift in the average minutes streamed by users who were part of this targeted program.
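A lift figure like the 29% above is typically computed as the relative difference between the targeted group and a baseline. A hypothetical sketch; the minute values below are illustrative, not Seven West Media’s actual figures:

```python
# Hypothetical sketch of how a campaign "lift" metric is computed;
# the numbers are made up for illustration only.

def lift(treatment_avg: float, control_avg: float) -> float:
    """Relative lift of the targeted group's average over the baseline's."""
    return (treatment_avg - control_avg) / control_avg

baseline_minutes = 20.0   # avg minutes streamed by users outside the campaigns
targeted_minutes = 25.8   # avg minutes streamed by users in the targeted program
print(f"lift: {lift(targeted_minutes, baseline_minutes):.0%}")  # lift: 29%
```

The key design point is the comparison group: without a baseline of untargeted users, a raw increase in minutes streamed cannot be attributed to the campaign.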

#3 – Executive Engagement

All projects need executive sponsorship and support to get basic budget sign-off; however, some projects benefit significantly from having executives more actively engaged. Big data projects fall into this category. With the right engagement, executives will provide headline messaging across the organisation about why the project exists, what the benefits are and, importantly, why departmental colleagues should proactively enable and support it. There will come a time when departmental colleagues will be asked to make their data, people and expertise available, and without the right mandate, supported by an engaged executive, your project is on hold in the best case or dead in the extreme case.

Contexti Client Example: Seven West Media’s big data project was sponsored by its Chief Commercial Officer who was aware, active, vocal and visible across the project. With an engaged executive sponsor, the Director of Data and Business Intelligence who was driving the project had mandate to align internal departments and external services providers, ultimately delivering a successful project.

#4 – Team Composition

When considering the team composition for big data projects, many will be able to quickly identify the Data Scientist role, but will struggle to name other roles. Many project failures have been the result of unallocated or misallocated resources. A Data Scientist is a Data Scientist; they are not a Platform Architect, Platform Engineer, Platform Administrator, Data Architect, Data Engineer, Commercial Strategist, Change Manager, Trainer or industry subject matter expert… to name just a few of the roles that need to be filled in a successful project. If you start by getting clarity on your commercial strategy (Driver #1) and work through how you will action your insights (Driver #2), that should be a good start to thinking through your team composition. Of course, depending on the size of your project and the complexity of your mandate and budget, you will take a different approach to the composition of your team. Some roles may be internally filled, some will be full-time roles, others will be part-time roles; you might supplement your core team with contractors or outsource some functions to third-party service providers. The key point here is: don’t just hire a Data Scientist then wonder why the project failed.

Contexti Client Example: Seven West Media’s big data project team comprised many internally filled roles (Director of Data and BI, Data Architect, Data Scientist, Data Analyst). They also engaged other internal departments (e.g. marketing, who executed the targeted email campaigns) as well as third-party providers such as Contexti, who took care of the design, build and management of their big data platform (meaning Platform Architecture, Platform Engineering, Platform Administration, Data Engineering and Support was outsourced).

#5 – Technology Choice

Your big data solution needs to be powered by an appropriate technology that supports your commercial objectives and use cases (Driver #1). Further, the technology needs to be supported by appropriately skilled, qualified and enabled resources (Driver #2). Too often we’ve seen technology choices made in the absence of commercial objectives, use cases and clarity on how the technology will be supported in a production environment. What’s worse, we’ve also seen technology decisions made because of internal politics, pre-existing vendor relationships and by people who are not close enough to the issues of the project. This not only sets the project up for failure from a technology perspective, it also causes the project team to lose faith in the leadership and decision-making process.

Contexti Client Example: Seven West Media had clarity on their technology choices. Leveraging the project leadership team’s prior experience, and with clarity on their commercial objectives, they identified two key challenges to focus on. Firstly, the variety and complexity of the data sources drove their decision towards a mature, secure and scalable enterprise-grade data management platform at the core of their data solution (they selected Cloudera). Secondly, given the highly variable nature of the events they were covering (such as the Australian Open, Wimbledon, the Rio Olympic Games etc.) with variable amounts of traffic and data, they needed the ability to scale up and down in a flexible manner, driving their decision for a cloud-based solution (they selected Amazon Web Services). Further, looking at the team composition, their skill-sets and familiarity with existing tools, the overall solution also integrated with Teradata on AWS, Tableau for visualisation and R for advanced analytics.

#6 – Agile Approach

With the first five drivers in place (Commercial Strategy, Defined Actions, Executive Engagement, Team Composition and Technology Choice) you want to execute effectively and demonstrate quick wins. Adopting an agile approach will enable you to test and learn, share the lessons learned and move to your next iteration to meet your commercial objectives. Without these first five drivers in place, it’s difficult to be agile, and we’ve seen many organisations fail because they talked themselves into building what they perceived to be ‘the biggest, most flexible, most secure big data platform that could solve any big data use case’ but in reality solved none. These non-agile projects lost momentum and ultimately failed, having spent so much time waiting for hardware to arrive, waiting for integrators to spend months building and securing big data platforms, waiting for internal departments to make data available, waiting for newly hired data scientists to start, and the list goes on. Without all six drivers in place, those involved with the project had no sense of urgency, purpose or accountability.

Contexti Client Example: Seven West Media adopted an agile approach from day one. Within six weeks of Contexti being engaged (to design, build and manage a big data solution), an initial pilot program was run to support the coverage of the 2016 Australian Open tennis event. Following that, the solution was expanded to support the 2016 Wimbledon tennis event, and then more work was done to scale up the solution for Seven West Media’s Australian coverage of the 2016 Rio Olympic Games.


By: Sidney Minassian – Founder & CEO, Contexti – Big Data Analytics



Data & Analytics Australian Recruitment Market Insights by FutureYou

With the launch of their data and analytics practice, led by Caroline McColl, FutureYou Executive Recruitment have produced an insightful report on the data and analytics recruitment market in Australia. This three-page report gives a great snapshot of:

  • Market Moves – who are the new Chief Data Officers, Heads of Analytics and Heads of Insights?
  • In Demand Skills – do you have what the market is looking for?
  • Industry Pain Points – key challenges in getting value from data – are you having the right conversations?
  • Candidate Spotlight – what’s the going salary rate, for which skills and industry experience?
  • Skills Testing – why would you get your candidates’ skills tested?
  • Top 5 Drivers for Career Advancement – What’s the ideal combination of skills across business, technology and data science that will advance your career in big data analytics?

FutureYou have kindly permitted Contexti to share this report with those interested. If you’re hiring or are seeking to get hired this report will provide some food for thought.

[activecampaign form=15]

5 Tips On How To Land A Big Data Job In Australia

In today’s Australian job market, if you’ve got some Big Data experience you’re most likely getting approached by recruiters and are probably spoilt for choice. If that’s the case, you don’t need to read on. This article is for the rest of you, who have heard about this ‘Big Data’ thing and are wondering how to get your foot in the door.

Given Contexti’s focus on the Big Data Analytics market in Australia, we’re fortunate to be aware of and in many cases involved in a broad range of Big Data Analytics related conversations, deals, projects, partnerships, hires, fires and events across Australia. The single biggest challenge we constantly hear about is the shortage of qualified and experienced ‘Big Data’ people.

While we don’t advise our customers to drop their standards in the quality of their hires, we do strongly warn against holding on to the belief that there is a magical unicorn big data guru out there. Instead we suggest organisations hire professionals with the right fundamentals (e.g. fit for culture & values, coachable, possess skills in certain technologies or analytics methods, etc) and implement a plan to develop them into capable Big Data Analytics practitioners.

Similarly, we’ve found ourselves having conversations with a broad range of professionals: some who are just starting out their careers and thinking about graduate roles, and others with decades of experience who now want to transition into a career in the growing Big Data Analytics space.

Like everything else in business and life there are no silver bullets, but if you approach this in a strategic and tactical manner, you will massively improve the odds in your favour.

So here are five tips to help you land a big data job in Australia:

#1 Define your target role

While ‘data scientist’ sounds like an exciting role, it may not be the right entry point for you. You want to get into a role where you will learn and where you will also quickly add value by bringing something to the table. To do this, think about your ‘home ground advantage’: what skills, experience or connections do you already have? Map them to the closest Big Data role in the industry most suitable to you.

Some real-world examples we are aware of:

  • Our own Damion Reeves at Contexti transitioned from being an experienced Database Administrator (DBA) with years of experience in infrastructure, Oracle and SQL to a Big Data Platform Engineer. While Hadoop and Spark were technologies he needed to learn, his underlying experience with Linux and UNIX, capabilities in shell scripting and knowledge of enterprise support and service protocols were immediate value-adds to Contexti and to our customers.
  • Our client Sharmaine Salis, Head of Data Architecture at Seven West Media, transitioned from a traditional Business Intelligence / Data Warehouse solutions role into a Big Data / Cloud Architect role, leading one of the most successful big data projects in Australia, which underpinned Seven’s Rio Olympic Games coverage.
  • One of our Hadoop & Spark training students, MingJian Tang, currently a Cyber Security Data Scientist at the Commonwealth Bank of Australia (CBA), transitioned into this role from a statistics and data mining background.
  • More broadly in the field, we’ve seen someone with a solid Telco background move into a Big Data Strategy role for one of the Telcos trying to monetise their data assets.

So the take-aways are:

  • There are many potential Big Data Analytics roles (Commercial Strategist, Platform Architect, Data Architect, Platform Engineer, Data Engineer, Analyst, Data Scientist, Project Manager, Quality Assurance, Sales, Business Development, Customer Success etc).
  • No one person will be qualified to do all the available roles in Big Data.
  • Find your home ground advantage and target a role that gets you excited and one where you can add value quickly.

#2 Skill up

You will massively improve your chances in landing a role if you’ve invested in skilling yourself up. The one obvious benefit is the theoretical and in many cases the practical knowledge you will gain by attending formal training. The other not so obvious benefit is the network of relationships you will create with the instructor and other class participants. Depending on the role, your budget, time availability etc there are many courses to take advantage of. Here are some of the short to long training and certification programs we are aware of:

#3 Network

An important factor in landing a new role is ‘who you know and who knows you’. Networking enables you to build relationships, get known, learn something new and contribute. There are many meetup groups and networking events. Here are some of the ones we attend:

#4 Be found

There are many ways to get your name out there and to be found: speaking at events and meetups, writing guest blog posts, publishing your work in online forums (GitHub, SlideShare etc), getting active on Twitter and Quora. The simplest and most obvious one, however, is to put effort into your LinkedIn profile. After you consider your target role and your home ground advantages (existing skills, industry experience etc) as well as your training and up-skilling strategy, you should update your LinkedIn profile.

Your profile should be authentic. This means stating correctly what you have done, the skills you possess and how much experience you actually have. Further, an authentic profile should include the objectives, aspirations and current activities you are undertaking to improve yourself, giving the potential recruiter an idea not only of where you’ve been but of where you are headed.

A recent example was when I was doing a search on LinkedIn for anyone who had included “Data Science” in their profile. I came across a professional who had recently completed a data science course in addition to having a math and statistics major and hands-on actuarial work experience. His LinkedIn headline said ‘Aspiring Data Scientist’. The word ‘aspiring’ gave me an indication of where he was headed and what he was looking for yet it was authentic as he wasn’t claiming to be an experienced data scientist.

This approach can be applied to your LinkedIn headline and your summary, where you can include your ‘elevator pitch’ of who you are, where you’ve been, what you’re great at and where you are heading.

#5 Look for early signals

To narrow down your targeting efforts and improve your odds, look for early signals that might lead you to a future job opportunity. Typically this means keeping your eyes open on LinkedIn, subscribing to relevant industry news and blogs, reading mainstream business and technology news and being an active networker. Early signals to watch for include companies announcing changes in strategy, appointments of new leaders, new partnerships or vendors winning contracts.

For example, in the last six months in Australia there have been a number of executive movements in the Chief Data Officer and Chief Digital Officer roles. This kind of appointment usually indicates a company is reprioritising ‘data’ as a strategic priority and is usually followed by a restructure and a recruitment drive. There have also been a number of public announcements of data deals and data partnerships, as well as vendors announcing contracts with new customers or publishing case studies of success stories with existing customers.

All of these are early signals that will give you hints on people, companies, technologies and deals to follow and target in order to land your next big data job.


By: Sidney Minassian – Founder & CEO, Contexti – Big Data Analytics



7 Ways to Increase Your Value as a Data Scientist

As a Data Scientist, you are currently in high demand and in a hot market. That is not about to change any time soon.

So on the one hand, you can ignore this article.

Yet on the other hand, given the 40+ big data projects we’ve delivered for Contexti clients, we’ve had a lot of interactions with Data Scientists inside companies of varying sizes and industries, and my observation is that Data Scientists are leaving value on the table for themselves and are therefore limiting their career and leadership trajectories.

There are probably a number of reasons for this, but primarily it’s because Data Scientists are allowing themselves to be pigeonholed into being ‘just the data person’.

You will have greater value as a Data Scientist when:

  • you have established credibility BEYOND being just the data person
  • you get a seat at the strategic table to discuss the business and customer context; and
  • your efforts result in measurable impact on the organisation

So here are 7 ways you can increase your value as a Data Scientist:

#1 Know the business

When you are on the ‘same page’ as the business you will engender a deeper level of conversation, you will ask better questions, you will push back on the right issues and overall you will command the respect of your colleagues beyond your analytics brilliance. You should know and be able to quickly articulate key business and profit details such as:

  • what industry you are in;
  • what are the top performing product / service lines;
  • what are your best channels to market;
  • who are your primary customers;
  • who are your most strategic partners (and why? what’s in it for them);
  • who are your biggest competitors and what is their strategic or competitive advantage;
  • who are your likely unexpected competitors and
  • who is going to disrupt you or your industry externally; etc.


#2 Get to know your Customer’s Customer

Don’t settle for just understanding the ‘Marketing’, ‘Risk’ and ‘Operations’ departments as your customers. While they may be your direct customers, you should also care to learn about your customer’s customer. Who are they serving? Ask to join your customer when they meet with their customers; this will give you another level of context, perspective and depth in understanding the ‘end customer’. By getting to know your customer’s customer, you will think differently about the problem you are solving, you will have a different conversation and you will create a higher level of rapport with your direct customer.

#3 Beyond the WHAT and the HOW…. Ask WHY

It’s easy to jump into problem-solving mode. The question is: are you solving the right problems? Often you’ll have clarity on ‘What’ you need to do and, given your skills, you’ll know the ‘How’. To increase your relevance and value, make sure you are also clear on the ‘WHY’. By understanding the ‘Why’ you will think creatively about the problem and solution – if you understand the ‘why’ you may recognise you’ve been tasked with the incorrect ‘what’. Even the conversation about the ‘why’ will help build trust between you and the people you are collaborating with.

#4 Step away from the data

To get context and perspective, step away from the data, models and charts and put yourself in a position to observe what you are meant to be measuring or solving in its physical form. Things beyond the numbers will jump out at you that can dictate the success or failure of your solutions. Industry nuances, the political landscape of the organisation, the organisation’s readiness to adopt change, culture and values, the user experience and the customer journey will all give you greater levels of insight beyond the numbers.

#5 Seek the ACTION

The best insights, if not executed, will create zero value. So in addition to understanding the ‘why’, seek to understand the ‘actions’ that will be taken given your insights. Often this will be outside of your domain or direct sphere of influence, and that is exactly the point. To move beyond being ‘just the data person’, you should seek clarity (and accountability) from your colleagues on what will be done with your insights, in what time frame, and how you and your colleagues will be informed about the impact.

#6 Build bridges with people

The right team composition is critical to ensuring success with data projects. In addition to Data Scientists you need customer advocates, subject matter experts, platform architects, platform engineers, data engineers, platform administrators, marketing/operations/risk/legal experts and so on. So as a Data Scientist, build relationships with these colleagues; they are all important contributors to delivering success. You will learn from them, you will teach them and, most importantly, you will have established a bridge, which will raise your value.

#7 Over-Communicate

We often hear that Data Scientists need to be ‘storytellers’, but often this is interpreted only as ‘storytelling with the numbers’. I suggest you should not wait for the ‘story with the numbers’ part of the project before you find your voice. It’s important to bring people on the journey, and you can do this by communicating (over-communicating). Share with them your understanding of the ‘business’, your knowledge of the ‘customer’s customer’, the ‘why’ of what you are working on, the insights you gained by ‘stepping away from the numbers’ and how you expect your insights to be turned into ‘actions’ to deliver value. Share with people your experimentations, your successes, failures and learnings. You will learn from their feedback and corrections, you will build respect with your openness and willingness to share and teach, and you will establish your voice in your organisation.