
In the workspace’s monitoring menu, select «Diagnostic settings».

      – Add a diagnostic setting and configure destinations such as Azure Monitor (a Log Analytics workspace), Azure Storage, or Event Hubs. Sending logs to these destinations helps in monitoring and auditing activities within your Synapse Analytics workspace.
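      As an illustration, the same kind of diagnostic setting can also be created programmatically. The following minimal Python sketch uses the azure-mgmt-monitor and azure-identity packages; the resource IDs, the setting name «synapse-diagnostics», and the log categories shown are placeholders and assumptions to be adapted to your own environment.

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

# Placeholder identifiers: replace with values from your subscription.
subscription_id = "<subscription-id>"
synapse_workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Synapse/workspaces/<workspace-name>"
)
log_analytics_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<log-analytics-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Create (or update) a diagnostic setting that routes the selected log
# categories and all metrics to the Log Analytics workspace.
client.diagnostic_settings.create_or_update(
    resource_uri=synapse_workspace_id,
    name="synapse-diagnostics",
    parameters={
        "workspace_id": log_analytics_id,
        "logs": [
            {"category": "SynapseRbacOperations", "enabled": True},
            {"category": "GatewayApiRequests", "enabled": True},
        ],
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    },
)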

      By following these examples and best practices, you can establish a robust security posture for your Azure Synapse Analytics environment. Regularly review and update security configurations to adapt to evolving threats and ensure ongoing protection of your valuable data.

      Chapter 3. Data Ingestion

      3.1 General Overview of Data Ingestion in Modern Data Engineering

      Data ingestion is the process of collecting, importing, and transferring raw data from various sources into a storage and processing system, often as part of a broader data processing pipeline. This fundamental step is crucial for organizations looking to harness the value of their data by making it available for analysis, reporting, and decision-making.

      Key Components of Data Ingestion:

      Data Sources: Data can originate from a multitude of sources, including databases, files, applications, sensors, and external APIs. These sources may contain structured, semi-structured, or unstructured data. Below are specific examples:

      Diverse Origins:

      Data sources encompass a wide array of origins, reflecting the diversity of information in the modern data landscape. These sources may include:

      Databases: Both relational and NoSQL databases serve as common sources. Examples include MySQL, PostgreSQL, MongoDB, and Cassandra.

      Files: Data is often stored in various file formats, such as CSV, JSON, Excel, or Parquet. These files may reside in local systems, network drives, or cloud storage.

      Applications: Data generated by business applications, software systems, or enterprise resource planning (ERP) systems constitutes a valuable source for analysis.

      Sensors and IoT Devices: In the context of the Internet of Things (IoT), data sources extend to sensors, devices, and edge computing environments, generating real-time data streams.

      Web APIs: Interactions with external services, platforms, or social media through Application Programming Interfaces (APIs) contribute additional data streams.
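      To make this variety of origins concrete, the minimal Python sketch below pulls raw records from three typical sources: a CSV file, a relational database, and a web API. The file path, database, table, and API URL are hypothetical placeholders.

import csv
import json
import sqlite3
import urllib.request

# 1. File source: read rows from a CSV export (path is hypothetical).
with open("exports/orders.csv", newline="") as f:
    file_records = list(csv.DictReader(f))

# 2. Database source: query a relational table (SQLite is used for brevity;
#    a production pipeline would connect to MySQL, PostgreSQL, etc.).
with sqlite3.connect("crm.db") as conn:
    db_records = conn.execute("SELECT id, name, email FROM customers").fetchall()

# 3. Web API source: fetch JSON from an external endpoint (URL is hypothetical).
with urllib.request.urlopen("https://api.example.com/v1/events") as resp:
    api_records = json.load(resp)

print(len(file_records), len(db_records), len(api_records))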

      Structured, Semi-Structured, and Unstructured Data:

      Data sources may contain various types of data, including:

      – Structured Data: Organized and formatted data with a clear schema, commonly found in relational databases.

      – Semi-Structured Data: Data that doesn’t conform to a rigid structure, often in formats like JSON or XML, allowing for flexibility.

      – Unstructured Data: Information without a predefined structure, such as text documents, images, audio, or video files.
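      The short Python sketch below illustrates how these three data types are typically handled at ingestion time: structured data fits a typed record, semi-structured JSON is parsed into a nested object, and unstructured content is read as raw bytes for downstream processing. The field names and the file name are hypothetical.

import json
from dataclasses import dataclass

# Structured: a fixed schema, modeled here as a typed record.
@dataclass
class Customer:
    id: int
    name: str
    country: str

structured = Customer(id=1, name="Acme Ltd", country="DE")

# Semi-structured: JSON with nested and optional fields, no rigid schema.
semi_structured = json.loads('{"id": 2, "tags": ["iot", "sensor"], "meta": {"fw": "1.4"}}')

# Unstructured: raw content with no predefined fields; interpretation
# (text mining, image processing, etc.) happens downstream.
with open("notes.txt", "rb") as f:   # hypothetical document
    unstructured = f.read()

print(structured, semi_structured["tags"], len(unstructured))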

      Streaming and Batch Data:

      Data can be generated and ingested in two primary modes:

      Batch Data: Involves collecting and processing data in predefined intervals or chunks. Batch processing is suitable for scenarios where near-real-time insights are not a strict requirement.

      Streaming Data: Involves the continuous processing of data as it arrives, enabling organizations to derive insights in near-real-time. Streaming is crucial for applications requiring immediate responses to changing data conditions.
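      A minimal, self-contained Python sketch contrasting the two modes is shown below; the in-memory event generator stands in for a real streaming source such as Kafka or Azure Event Hubs, and the file path is hypothetical.

import random
import time
from datetime import datetime, timezone

def batch_ingest(path: str) -> list:
    """Batch mode: read a complete file on a schedule (e.g. nightly or hourly)."""
    with open(path) as f:
        return [{"line": line.rstrip()} for line in f]

def event_stream():
    """Stands in for a streaming source such as Kafka or Azure Event Hubs."""
    while True:
        yield {"ts": datetime.now(timezone.utc).isoformat(), "value": random.random()}
        time.sleep(0.1)

def streaming_ingest(limit: int = 5) -> None:
    """Streaming mode: process each event as soon as it arrives."""
    for i, event in enumerate(event_stream()):
        print("processed", event)          # in practice: transform and load
        if i + 1 >= limit:
            break

# batch_ingest("exports/orders.csv") would be run by a scheduler (hypothetical path);
# the streaming path reacts to events continuously.
streaming_ingest()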

      External and Internal Data:

      Data sources can be classified based on their origin:

      External Data Sources: Data acquired from sources outside the organization, such as third-party databases, public datasets, or data purchased from data providers.

      Internal Data Sources: Data generated and collected within the organization, including customer databases, transaction records, and internal applications.

      Data Movement: The collected data needs to be transported or copied from source systems to a designated storage or processing environment. This can involve batch processing or real-time streaming, depending on the nature of the data and the requirements of the analytics system.

      Successful data movement ensures that data is collected and made available for analysis in a timely and reliable manner. Let’s explore the key aspects of data movement in detail:

      Bulk loading is a method of transferring large volumes of data in batches or chunks, optimizing the transportation process. Its key characteristics are:

      Efficiency: Bulk loading is efficient for scenarios where large datasets need to be moved, as it minimizes the overhead associated with processing individual records.

      Reduced Network Impact: Transferring data in bulk reduces the impact on network resources compared to processing individual records separately.

      Bulk loading is suitable for scenarios where data is ingested at predefined intervals, such as daily or hourly batches. When setting up a new data warehouse or repository, bulk loading is often used for the initial transfer of historical data.
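      The following simplified Python sketch illustrates the bulk-loading pattern: rows are read from a large CSV file and inserted in chunks rather than one record at a time. SQLite and the chunk size of 10,000 are used only for illustration; the same idea applies to warehouse bulk interfaces such as COPY INTO or PolyBase.

import csv
import sqlite3
from itertools import islice

CHUNK_SIZE = 10_000  # illustrative batch size

def bulk_load(csv_path: str, db_path: str) -> None:
    """Load a large CSV file into a table in chunks instead of row by row."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS sales (order_id TEXT, amount REAL)")
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        while True:
            chunk = list(islice(reader, CHUNK_SIZE))
            if not chunk:
                break
            # One round trip per chunk keeps the per-record overhead low.
            conn.executemany("INSERT INTO sales VALUES (?, ?)", chunk)
            conn.commit()
    conn.close()

bulk_load("exports/sales.csv", "warehouse.db")  # hypothetical paths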

      Data Transformation: In some cases, data may undergo transformations during the ingestion process to conform to a standardized format, resolve schema mismatches, or cleanse and enrich the data for better quality. Data transformation involves:

      Schema Mapping: Adjusting data structures to match the schema of the destination system. It is a critical aspect of data integration and transformation, playing a pivotal role in ensuring that data from diverse sources can be seamlessly incorporated into a target system with a different structure. This process involves defining the correspondence between the source and target data schemas, allowing for a harmonious transfer of information. Let’s explore the key aspects of schema mapping in detail.

      In the context of databases, a schema defines the structure of the data, including the tables, fields, and relationships. Schema mapping is the process of establishing relationships between the elements (tables, columns) of the source schema and the target schema.

      A key characteristic of schema mapping is field-to-field mapping: each field in the source schema is mapped to a corresponding field in the target schema. This mapping ensures that data is correctly aligned during the transformation process.

      Data Type Alignment: The data types of corresponding fields must be aligned. For example, if a field in the source schema is of type «integer,» the mapped field in the target schema should also be of an appropriate integer type.

      Handling Complex Relationships: In cases where relationships exist between tables in the source schema, schema mapping extends to managing these relationships in the target schema.

      Schema mapping is essential for achieving interoperability between systems with different data structures. It enables seamless communication and data exchange. In data integration scenarios, where data from various sources needs to be consolidated, schema mapping ensures a unified structure for analysis and reporting. During system migrations or upgrades, schema mapping facilitates the transition of data from an old schema to a new one, preserving data integrity.
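      The small Python sketch below illustrates field-to-field mapping and data type alignment for a single record; the source and target field names and the casts are, of course, hypothetical.

# Field-to-field mapping: source field -> (target field, cast for type alignment).
FIELD_MAP = {
    "cust_id":   ("customer_id", int),
    "full_name": ("name",        str),
    "signup_dt": ("signup_date", str),    # e.g. kept as an ISO-8601 string
    "balance":   ("balance_eur", float),
}

def map_record(source_row: dict) -> dict:
    """Apply the schema mapping to a single source record."""
    target_row = {}
    for src_field, (dst_field, cast) in FIELD_MAP.items():
        if src_field in source_row:
            target_row[dst_field] = cast(source_row[src_field])
    return target_row

source = {"cust_id": "42", "full_name": "Ada Lovelace",
          "signup_dt": "2023-05-01", "balance": "10.50"}
print(map_record(source))
# {'customer_id': 42, 'name': 'Ada Lovelace', 'signup_date': '2023-05-01', 'balance_eur': 10.5}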

      Data Cleansing is a foundational and indispensable process within data management, strategically designed to identify and rectify errors, inconsistencies, and inaccuracies inherent in datasets. This critical step involves a multifaceted approach, encompassing the detection of anomalies, standardization of data formats, validation procedures to ensure accuracy, and the adept handling of missing values. The overarching significance of data cleansing is underscored by its pivotal role in bolstering decision-making processes, elevating analytics to a more reliable standard, and ensuring compliance with regulatory requirements.
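      As a minimal illustration of these steps, the Python sketch below (using pandas, with hypothetical column names and values) standardizes formats, validates and converts data types, removes duplicates, and handles missing values.

import pandas as pd

# Hypothetical raw customer data with typical defects.
raw = pd.DataFrame({
    "email":  [" Ada@Example.com", "ada@example.com", None, "bob@example.com "],
    "signup": ["2023-05-01", "2023-05-01", "2023-06-01", None],
    "amount": ["10.5", "10.5", "n/a", "7"],
})

cleaned = (
    raw.assign(
        # Standardize formats: trim whitespace and normalize case.
        email=raw["email"].str.strip().str.lower(),
        # Validate and convert types; unparseable values become NaT/NaN.
        signup=pd.to_datetime(raw["signup"], errors="coerce"),
        amount=pd.to_numeric(raw["amount"], errors="coerce"),
    )
    .drop_duplicates(subset=["email", "signup"])  # remove exact repeats
    .dropna(subset=["email"])                     # handle records missing a key value
)
print(cleaned)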
