Profit Maximization Techniques for Operating Chemical Plants
Author: Sandip K. Lahiri
ISBN: 9781119532170
Publisher: John Wiley & Sons Limited
As shown in a McKinsey study, the situation requires vision and critical thinking to decide which data to store at its original granularity and which to aggregate or pre‐analyze. With respect to relevance, the more classical "hypothesis‐driven" or "use case backwards" approach often delivers better results than the much‐praised "Big Data, brute force" approach.
Data layering is another critical requirement when handling the enormous data volumes of the chemical industry. Instead of overloading the data analytics algorithm with all types of raw data, carefully organizing the data into several logical layers, and defining the logic by which to stack those layers, helps generate more meaningful data.
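As a minimal sketch of the layering idea (the layer names, sensor values, and window size are illustrative assumptions, not from the text), raw plant data can be organized into a raw layer, a cleaned layer, and an aggregated layer, so that each downstream algorithm consumes the granularity it actually needs:

```python
from statistics import mean

# Layer 1: raw sensor readings (sample index, reactor temperature in deg C);
# None marks a transmitter dropout.
raw_layer = [
    (0, 351.2), (1, 350.8), (2, None), (3, 352.1),
    (4, 351.5), (5, 349.9), (6, 350.4), (7, None),
]

# Layer 2: cleaned layer -- drop invalid readings here once, instead of
# pushing them into every downstream algorithm.
clean_layer = [(t, v) for t, v in raw_layer if v is not None]

# Layer 3: aggregated layer -- mean temperature per 4-sample window,
# the coarser granularity a plant-wide model might actually need.
window = 4
agg_layer = {
    w: mean(v for t, v in clean_layer if t // window == w)
    for w in {t // window for t, _ in clean_layer}
}

print(clean_layer)  # 6 valid readings survive cleaning
print(agg_layer)    # one mean value per window
```

Each layer is derived from the one below it, so an analysis can be traced back to the raw readings when needed.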
2.6.2 Data Refinement is a Two‐Step Iteration
Once all relevant raw data of the chemical industry is captured and stored, the next step is translating the enormous amount of unrefined raw data into actionable business insights. This is a critical step, and it is where most of the challenges lie. The two‐step process comprises enrichment and extraction (Holger Hürtgen, 2018).
Step 1: Enriching data with additional information and/or domain knowledge. It is important to understand that a data engineer working alone cannot perform this translation. A domain expert who runs the chemical company is absolutely necessary to enrich the data with additional domain knowledge – a somewhat more complex process. In essence, human expertise and domain knowledge are as important to making data useful as the power of analytics and algorithms.
The blend of data analytics capability and a domain expert's knowledge is still the optimal approach to data enrichment, although this may well change one day with further developments in the AI space. Therefore, before we even start with machine learning, we need to involve human experts who use their expertise to articulate their hypotheses.
The task of a data scientist (or sometimes a data engineer at this stage) is to translate, i.e. codify, this additional information and/or domain knowledge into variables. Concretely, this means transforming existing data into new variables – a process often called feature engineering.
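A minimal sketch of this codification step (the variable names and the derived features are assumptions for illustration): domain knowledge such as "a rising discharge‐to‐suction pressure ratio can indicate compressor degradation" can be turned into new variables computed from the raw measurements:

```python
# Raw measurements for a compressor: suction and discharge pressure (bar).
readings = [
    {"p_suction": 2.0, "p_discharge": 8.0},
    {"p_suction": 2.1, "p_discharge": 8.9},
    {"p_suction": 1.9, "p_discharge": 9.5},
]

def engineer_features(rows):
    """Codify domain knowledge into new variables (feature engineering)."""
    out = []
    prev_ratio = None
    for r in rows:
        feat = dict(r)
        # Domain-inspired feature: compression ratio.
        feat["ratio"] = r["p_discharge"] / r["p_suction"]
        # Trend feature: change in ratio since the previous sample.
        feat["ratio_delta"] = 0.0 if prev_ratio is None else feat["ratio"] - prev_ratio
        prev_ratio = feat["ratio"]
        out.append(feat)
    return out

features = engineer_features(readings)
```

The machine learning algorithm then sees `ratio` and `ratio_delta` directly, rather than having to rediscover that relationship from the raw pressures.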
Man + machine example: predictive maintenance of large compressors in the chemical industry. Deep data science knowledge is important for choosing the right machine learning algorithm and for fine‐tuning models so that they best predict machine or component failure in compressors. At the same time, engineering domain knowledge specific to compressors makes a big difference when interpreting results and identifying the root causes of failures. Sometimes data collected from other industries running similar types of compressors can further optimize these models by suggesting additional drivers of failure that can be added to the predictive model to improve its predictive power. Domain knowledge also helps to interpret the results and derive concrete business solutions to prevent failures in the future. Finally, business knowledge is critical when implementing the recommendations, so that processes can be appropriately aligned, e.g. training maintenance engineers to schedule predictive maintenance using the outputs of the model in their daily work.
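A toy sketch of such a model (the feature names, baselines, weights, and alarm limit are all assumptions, not the book's model): domain knowledge supplies the "healthy" baselines, while the data side combines the excesses into a single failure‐risk score:

```python
def failure_risk(vibration_mm_s: float, bearing_temp_c: float) -> float:
    """Toy risk score: weighted excess above assumed healthy baselines."""
    vib_excess = max(0.0, vibration_mm_s - 4.5)    # assumed vibration alarm zone
    temp_excess = max(0.0, bearing_temp_c - 85.0)  # assumed bearing temp limit
    return 0.6 * vib_excess + 0.04 * temp_excess

def needs_maintenance(vibration_mm_s: float, bearing_temp_c: float,
                      limit: float = 1.0) -> bool:
    """Flag a compressor whose risk score crosses the alarm limit."""
    return failure_risk(vibration_mm_s, bearing_temp_c) > limit

print(needs_maintenance(3.0, 80.0))   # healthy compressor -> False
print(needs_maintenance(7.0, 95.0))   # degraded compressor -> True
```

In practice the weights and thresholds would be learned from historical failure data rather than set by hand, but the division of labor is the same: engineers supply the physically meaningful limits, analytics supplies the scoring.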
Step 2: Extracting insights using machine learning algorithms. The purpose of this step is to select and run the appropriate machine learning algorithm and to do the actual maths and number crunching. The objective is to find patterns in the data and to select features. Although a sophisticated AI‐based algorithm is capable of finding all the features in the data, involving domain experts in this process helps to generate insights and improves the ability to explain the resulting solutions. Creating new features helps the machine find patterns more easily and also helps humans describe and act on these patterns.
The purpose of this step is to utilize different machine learning algorithms to identify these patterns. Typically, one distinguishes among descriptive analytics (what happened in the past and why?), predictive analytics (what will happen in the future?), and prescriptive analytics (how can we change the future?). In all of these, both simple and quite sophisticated methods can be used, and increasingly advanced AI and machine learning techniques are being adopted as the amount of available data and computing power grows. Figure 2.6 depicts how data science becomes an iterative process that leverages both human domain expertise and advanced AI‐based machine learning techniques.
Figure 2.6 Data science is an iterative process that leverages both human domain expertise and advanced AI‐based machine learning techniques
2.7 From Valuable Data Analytics Results to Achieving Business Impact: The Downstream Activities
The downstream part of the insights value chain comprises non‐technical components. It involves the people, processes, and business understanding through which – via a systematic approach – these new data‐driven insights can be operationalized in an overall strategy and operating model (Holger Hürtgen, 2018).
2.7.1 Turning Insights into Action
Once we have extracted important insights from the models, the next crucial step begins: turning these insights into action in order to generate business impact. For example, a predictive maintenance model may warn you that a compressor or another asset is about to break down, but the maintenance itself still has to be carried out. It is crucial to understand that merely knowing the probability of a breakdown is not sufficient; prevention, not prediction, is the key to business impact. Turning insights into action thus requires two things: first, understanding the insights coming from the data analytics and knowing what to do; second, even once it is clear what action needs to be taken, success depends on when and how that action is taken. Knowledge generation by data analytics software is not sufficient; taking corrective and preventive actions based on these insights is the key driver of business impact.
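The step from prediction to action can be sketched as a small decision rule (the asset tag, probability thresholds, and lead times are hypothetical assumptions): a predicted failure probability is mapped to a concrete, scheduled maintenance action rather than left as a number on a dashboard:

```python
import datetime

def schedule_action(asset: str, failure_prob: float, today: datetime.date):
    """Map a predicted failure probability to a concrete maintenance action.
    Thresholds and lead times are illustrative assumptions."""
    if failure_prob >= 0.8:
        # Imminent risk: act immediately.
        return (asset, "shut down and inspect", today)
    if failure_prob >= 0.5:
        # Elevated risk: plan maintenance within a week.
        return (asset, "schedule maintenance", today + datetime.timedelta(days=7))
    # Low risk: no intervention, keep watching.
    return (asset, "continue monitoring", None)

print(schedule_action("K-101", 0.9, datetime.date(2020, 1, 6)))
```

The point of the sketch is the second half of the argument above: the model's output only creates impact once it is bound to a specific action, owner, and date.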
2.7.2 Developing Data Culture
The long‐term success of digital transformation requires a company to develop a data culture. This essentially means developing a culture in which the company's business decisions are based on data analytics and regular employees are equipped to translate data analytics insights into their day‐to‐day actions. A company's internal structure and reward systems should be adapted in a fashion that promotes this data culture.
2.7.3 Mastering Tasks Concerning Technology and Infrastructure as Well as Organization and Governance
In this step, the organization's work processes, culture, responsibility hierarchy, governance structure, etc. need to be changed in a way that enables the organization to act on the insights from advanced analytics and create an impact. An organization needs the right set of easy‐to‐use tools – e.g. dashboards or recommendation engines – to enable personnel to easily generate business insights, and a working environment that facilitates the integration of those insights, e.g. governance that enables and manages the necessary changes within the organization.
References
1 Holger Hürtgen, N.M. (2018). Achieving business impact with data. Retrieved September 25, 2019, from https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/achieving-business-impact-with-data.
2 Ji,