Mon, Feb 17, 2025


No Python, No SQL Templates, No YAML: Why Your Open Source Data Quality Tool Should Generate 80% Of Your Data Quality Tests Automatically

DataKitchen

As a data engineer, ensuring data quality is both essential and overwhelming. The sheer number of tables, the complexity of how the data is used, and the volume of work make writing tests manually an impossible task.
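To make the idea of auto-generated tests concrete, here is a small illustrative Python sketch (not the DataKitchen tool itself): it takes hypothetical column metadata, of the kind a profiler or information_schema query might return, and emits baseline not-null and uniqueness checks as SQL.

```python
# Illustrative sketch: generate baseline data quality checks from column metadata.
# The metadata shape, table name, and checks are hypothetical placeholders.

def generate_basic_checks(columns: list[dict], table: str) -> list[str]:
    """Emit simple SQL checks (not-null, uniqueness) from column metadata."""
    checks = []
    for col in columns:
        name = col["column_name"]
        if col.get("is_nullable") == "NO":
            checks.append(
                f"SELECT COUNT(*) AS failures FROM {table} WHERE {name} IS NULL"
            )
        if col.get("is_unique"):
            checks.append(
                f"SELECT {name}, COUNT(*) FROM {table} "
                f"GROUP BY {name} HAVING COUNT(*) > 1"
            )
    return checks

# Example metadata a profiler might return for an 'orders' table.
columns = [
    {"column_name": "order_id", "is_nullable": "NO", "is_unique": True},
    {"column_name": "customer_id", "is_nullable": "NO"},
]
for sql in generate_basic_checks(columns, "orders"):
    print(sql)
```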


Parallelize NumPy Array Operations for Increased Speed

KDnuggets

Speed up array operations with parallelization techniques you may not have previously known.
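As a taste of the kind of technique such an article covers, here is a minimal sketch that parallelizes an element-wise computation by splitting an array into chunks and mapping them over a process pool; the function and chunk count are illustrative choices, not a quote from the article.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def heavy_op(chunk: np.ndarray) -> np.ndarray:
    # Stand-in for an expensive element-wise computation.
    return np.sqrt(chunk) * np.log1p(chunk)

def parallel_apply(arr: np.ndarray, n_chunks: int = 4) -> np.ndarray:
    # Split the array, process chunks in separate processes, then reassemble.
    chunks = np.array_split(arr, n_chunks)
    with ProcessPoolExecutor(max_workers=n_chunks) as pool:
        results = list(pool.map(heavy_op, chunks))
    return np.concatenate(results)

if __name__ == "__main__":
    data = np.random.rand(10_000_000)
    out = parallel_apply(data)
    print(out.shape)
```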


Trending Sources


Key Challenges in Determining Address Serviceability for Telecommunications

Precisely

I’ve been in the data business for nearly 30 years, and I’m still learning. Lately, I’ve been diving deep into the specific needs of telecommunication companies, particularly understanding the serviceability and “salability” of an address. Much of my career has been spent building data to accurately locate addresses for business intelligence (at GDT and Pitney Bowes) or navigation (at Tele Atlas and TomTom).


Dynamic CSV Column Mapping with Stored Procedures

Cloudyard

Loading CSV files into Snowflake is a common data engineering task. A frequent challenge arises when CSV files contain more columns than their corresponding Snowflake tables: the COPY INTO command with schema evolution (AUTO_CHANGE=TRUE) fails because it requires matching columns. To address this, dynamic CSV column mapping with stored procedures can be used to create a flexible, automated process that maps additional columns in the CSV to the target table.
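A rough sketch of the same idea in Python, using the Snowflake Python connector, might compare the CSV header against the table's existing columns and add any missing ones before loading. The connection details, table, stage, and file format options below are assumptions for illustration, not the Cloudyard procedure itself.

```python
import csv

import snowflake.connector

# Hypothetical connection settings; replace with your own account details.
conn = snowflake.connector.connect(
    account="...", user="...", password="...",
    warehouse="...", database="...", schema="...",
)
cur = conn.cursor()

# Discover the incoming column names from the CSV header.
with open("orders.csv", newline="") as f:
    csv_columns = [c.strip().upper() for c in next(csv.reader(f))]

# Fetch the target table's current columns.
cur.execute(
    "SELECT column_name FROM information_schema.columns WHERE table_name = 'ORDERS'"
)
table_columns = {row[0] for row in cur.fetchall()}

# Add any column present in the file but missing from the table (VARCHAR here).
for col in csv_columns:
    if col not in table_columns:
        cur.execute(f"ALTER TABLE ORDERS ADD COLUMN {col} VARCHAR")

# Load the staged file, matching columns to the table by header name.
cur.execute(
    "COPY INTO ORDERS FROM @my_stage/orders.csv "
    "FILE_FORMAT = (TYPE = CSV PARSE_HEADER = TRUE) "
    "MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE"
)
```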


A Guide to Debugging Apache Airflow® DAGs

In Airflow, DAGs (your data pipelines) support nearly every use case. As these workflows grow in complexity and scale, efficiently identifying and resolving issues becomes a critical skill for every data engineer. This is a comprehensive guide, with best practices and examples, to debugging Airflow DAGs. You’ll learn how to create a standardized process for debugging to quickly diagnose errors in your DAGs, identify common issues with DAGs, tasks, and connections, and distinguish between Airflow-related and non-Airflow-related issues.
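One habit that fits a standardized debugging process is running a DAG locally with dag.test(), which executes every task in a single process so stack traces and debugger breakpoints are immediately usable. The sketch below is an assumed minimal example (the DAG id and task logic are invented), not an excerpt from the guide.

```python
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule=None, start_date=datetime(2025, 2, 17), catchup=False)
def debug_demo():
    @task
    def extract():
        return [1, 2, 3]

    @task
    def transform(values: list[int]):
        # A convenient place to set a breakpoint or add logging while debugging.
        return [v * 2 for v in values]

    transform(extract())

dag_object = debug_demo()

if __name__ == "__main__":
    # dag.test() runs the whole DAG in-process, which makes stack traces
    # and debugger sessions much easier to work with than scheduler logs.
    dag_object.test()
```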


Textual Data Wrangling with Python: A Step-by-Step Guide

WeCloudData

Welcome back to our Data Wrangling with Python series! In the first blog of the series, we introduced the basics of data wrangling using Python: handling missing values, removing special characters, and dropping unnecessary columns to prepare our dataset for further analysis. Now, the next step is to deeply explore […]
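For reference, the basics recapped above look roughly like the following pandas sketch; the column names and cleaning rules are hypothetical rather than taken from the series' dataset.

```python
import pandas as pd

# Hypothetical raw data with the usual problems: missing values,
# stray special characters, and a column we do not need.
df = pd.DataFrame({
    "review": ["Great product!!!", None, "Terrible... would not buy :("],
    "rating": [5, None, 1],
    "internal_id": ["a1", "a2", "a3"],
})

# 1. Handle missing values.
df["review"] = df["review"].fillna("")
df["rating"] = df["rating"].fillna(df["rating"].median())

# 2. Remove special characters from the text column.
df["review"] = df["review"].str.replace(r"[^A-Za-z0-9\s]", "", regex=True)

# 3. Drop a column that is not needed for analysis.
df = df.drop(columns=["internal_id"])

print(df)
```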


Alabama Power Company (APC) Leverages Databricks for Outage and Storm Modeling

databricks

As we continue to navigate the complexities of the modern world, it's becoming increasingly clear that data-driven decision making is the key to unlocking success.


More Trending


Introducing the DLT Sink API: Write Pipelines to Kafka and External Delta Tables

databricks

If you are new to Delta Live Tables, we recommend reading Getting Started with Delta Live Tables before diving into this blog.
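For a flavor of what the sink API enables, here is a hedged sketch of a pipeline step that declares a Kafka sink with dlt.create_sink and streams a table into it via an append flow. The broker, topic, and table names are placeholders, and option keys may vary by runtime, so treat this as a sketch rather than a verified example.

```python
import dlt
from pyspark.sql.functions import to_json, struct

# Declare an external Kafka sink (broker and topic are placeholders).
dlt.create_sink(
    name="events_kafka_sink",
    format="kafka",
    options={
        "kafka.bootstrap.servers": "broker-1:9092",
        "topic": "enriched_events",
    },
)

# An append flow that streams rows from an upstream DLT table into the sink.
# Kafka expects a 'value' column, so each row is serialized to JSON here.
@dlt.append_flow(name="events_to_kafka", target="events_kafka_sink")
def events_to_kafka():
    return (
        dlt.read_stream("clean_events")
        .select(to_json(struct("*")).alias("value"))
    )
```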


Become an AI Engineer for Free This Week

KDnuggets

Learn AI for free on DataCamp from February 17 to 23.


Mastering Apache Airflow® 3.0: What’s New (and What’s Next) for Data Orchestration

Speaker: Tamara Fingerlin, Developer Advocate

Apache Airflow® 3.0, the most anticipated Airflow release yet, officially launched this April. As the de facto standard for data orchestration, Airflow is trusted by over 77,000 organizations to power everything from advanced analytics to production AI and MLOps. With the 3.0 release, the top-requested features from the community were delivered, including a revamped UI for easier navigation, stronger security, and greater flexibility to run tasks anywhere at any time.