I used VS Code, Sublime, Notepad++, TextMate, and others, but at some point shortcuts like cmd(+shift)+end and jumping from word to word with option+arrow keys were no longer fast enough. I was hitting my limits. Everything I did, I did decently fast, but I wasn’t getting any faster. Vim is the only editor you keep getting faster with over time; it is built entirely around shortcuts.
Matplotlib is the most famous and commonly used plotting library in Python. It allows you to create clear and interactive visualizations that make your data easier to understand and your results more concrete.
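As a quick illustration of the kind of chart the library produces, here is a minimal sketch of a labeled line plot; the data is made up purely for the example.

```python
# Minimal Matplotlib sketch: plot a labeled sine curve with axis labels and a legend.
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 200)
y = np.sin(x)

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(x, y, label="sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("A simple Matplotlib line plot")
ax.legend()
plt.show()
```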
Introducing fully managed Apache Kafka® + Flink for the most robust, cloud-native data streaming platform with stream processing, integration, and streaming analytics in one.
Originally published on 5 January 2023. 👋 Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. We cover one out of seven topics in today’s subscriber-only The Scoop issue. To get this newsletter every week, subscribe here. For most engineering teams, returning from the winter holiday usually involves gradually getting back into the swing of things.
In Airflow, DAGs (your data pipelines) support nearly every use case. As these workflows grow in complexity and scale, efficiently identifying and resolving issues becomes a critical skill for every data engineer. This is a comprehensive guide with best practices and examples for debugging Airflow DAGs. You’ll learn how to: create a standardized process for debugging to quickly diagnose errors in your DAGs; identify common issues with DAGs, tasks, and connections; distinguish between Airflow-relate
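As a flavor of what a standardized debugging process can look like, here is a minimal sketch (the file name and the second check are hypothetical) that catches DAG import errors with pytest before any task-level debugging starts.

```python
# tests/test_dag_integrity.py (hypothetical): verify every DAG in the project
# parses cleanly before digging into task-level failures.
import pytest
from airflow.models import DagBag

@pytest.fixture(scope="session")
def dag_bag():
    # Parse all DAG files from the dags/ folder, skipping Airflow's example DAGs.
    return DagBag(dag_folder="dags/", include_examples=False)

def test_no_import_errors(dag_bag):
    # Any syntax error, missing import, or broken top-level code shows up here.
    assert dag_bag.import_errors == {}, f"DAG import errors: {dag_bag.import_errors}"

def test_dags_are_tagged(dag_bag):
    # Example of a standardized convention a team might enforce on every DAG.
    for dag_id, dag in dag_bag.dags.items():
        assert dag.tags, f"{dag_id} has no tags"
```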
Check out this solid plan for learning Data Science, Machine Learning, and Deep Learning. The entire plan is currently available at no cost to KDnuggets readers.
🎙 A few weeks ago I did my first podcast with Robin. We talked about data engineering and everything involved in doing a weekly curation. This is the first episode of Robin's podcast in English, and you should follow him because more are coming! In the podcast we talked about 🔥 My journey before launching the newsletter 🔥 Why and how I write 🔥 My main challenges as a Data Engineer 🔥 My favorite content 🔥 What I like about data 🔥 A few tips f
Times are changing, and at a near-constant pace. With shifting customer preferences and disruptive world events shaking up the global supply chain market, many business leaders are left wondering whether they’ll be able to stay competitive. Supply chain automation technologies have a big role to play in providing end-to-end visibility and risk mitigation for complex, data-intensive SAP processes in the supply chain.
In this article, we are going to explore the core open-source tools any company needs to become data-driven. We’ll cover integration, transformation, orchestration, analytics, and ML tools as a starter guide to the latest open data stack. Let’s start with the Modern Data Stack: have you heard of it, or do you know where the term came from?
Understanding and aligning with each business domain’s unique incentives and workflows is what ultimately makes data teams not just efficient, but great. Part one of this series looked at everyone’s favorite spreadsheet power users: the finance team. This article will examine how data teams can better conduct product experimentation and better align with product teams.
Apache Airflow® 3.0, the most anticipated Airflow release yet, officially launched this April. As the de facto standard for data orchestration, Airflow is trusted by over 77,000 organizations to power everything from advanced analytics to production AI and MLOps. With the 3.0 release, the top-requested features from the community were delivered, including a revamped UI for easier navigation, stronger security, and greater flexibility to run tasks anywhere at any time.
In this post we will discuss multiple scenarios for clustering tables. We will analyze and implement the following scenarios: non-clustered to clustered table (create a clustering key on a normal table and observe partition pruning), and CLONE of a clustered table (clone the clustered table above and analyze its clustering).
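For readers who want to follow along, here is a minimal sketch of the two scenarios, assuming Snowflake and the snowflake-connector-python package; the table and column names are placeholders.

```python
# Sketch (assuming Snowflake): add a clustering key to an existing table,
# clone it, and inspect clustering information. SALES/ORDER_DATE are
# hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<db>", schema="<schema>",
)
cur = conn.cursor()

# Scenario 1: turn a normal table into a clustered table; queries filtering
# on ORDER_DATE can then benefit from partition pruning.
cur.execute("ALTER TABLE SALES CLUSTER BY (ORDER_DATE)")

# Scenario 2: zero-copy clone of the clustered table; the clone inherits
# the clustering key.
cur.execute("CREATE TABLE SALES_CLONE CLONE SALES")

# Inspect how well the table is clustered on the chosen key.
cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('SALES', '(ORDER_DATE)')")
print(cur.fetchone()[0])

cur.close()
conn.close()
```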
Geospatial data has been driving innovation for centuries, through the use of maps and cartography, and more recently through digital content. For example, the oldest.
Environmental, social, and governance (ESG) initiatives are topics of discussion everywhere – in the workplace, social media, news outlets, and beyond. And for good reason. Recent public advocacy efforts around climate issues, diversity and inclusion, data privacy, and more have been driving forces in pushing ESG to the forefront. While stellar products and services used to be enough for businesses to attract new customers, investors, and employees – and win their loyalty over time – that’s not
Speaker: Alex Salazar, CEO & Co-Founder @ Arcade | Nate Barbettini, Founding Engineer @ Arcade | Tony Karrer, Founder & CTO @ Aggregage
There’s a lot of noise surrounding the ability of AI agents to connect to your tools, systems and data. But building an AI application into a reliable, secure workflow agent isn’t as simple as plugging in an API. As an engineering leader, it can be challenging to make sense of this evolving landscape, but agent tooling provides such high value that it’s critical we figure out how to move forward.
As we start 2023, our product marketing team has compiled a list of the top 10 features in Teradata Vantage that have immensely helped our customers and represent technological breakthroughs.
This blog is authored by Denis Kamotsky, Principal Software Engineer at Corning. Corning has been one of the world’s leading innovators in materials scien.
Speaker: Andrew Skoog, Founder of MachinistX & President of Hexis Representatives
Manufacturing is evolving, and the right technology can empower—not replace—your workforce. Smart automation and AI-driven software are revolutionizing decision-making, optimizing processes, and improving efficiency. But how do you implement these tools with confidence and ensure they complement human expertise rather than override it? Join industry expert Andrew Skoog as he explores how manufacturers can leverage automation to enhance operations, streamline workflows, and make smarter, data-dri
Does your company have a formal data strategy? If so, does that strategy effectively lay out a path toward better business outcomes by helping you optimize your use of data? Although it is a given that data can be one of a company’s real differentiators if used properly, many organizations still do not have a comprehensive data strategy in place.
In the past year, businesses that doubled down on digital transformation during the pandemic saw their efforts come to fruition in the form of cost savings and more streamlined data management. Faced with even more pressure to remain resilient and agile amid looming global economic threats, businesses in the Asia-Pacific (APAC) region are looking to further mobilize emerging technologies such as artificial intelligence (AI) and machine learning that will optimize operational efficiencies and cost savi
Check our Solution Accelerator for Matrix Factorization for more details and to download the notebooks. Recommenders are a critical part of the modern.
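For context on what matrix factorization means for recommenders, here is a minimal NumPy sketch (not the Solution Accelerator's code) that learns low-rank user and item factors from a toy ratings matrix; the data and hyperparameters are made up for illustration.

```python
# Minimal matrix factorization sketch: predict missing ratings by learning
# low-rank user and item factors with plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)
R = np.array([            # users x items, 0 means "not rated"
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)
mask = R > 0

k, lr, reg, epochs = 2, 0.01, 0.02, 2000
U = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

for _ in range(epochs):
    err = mask * (R - U @ V.T)          # error only on observed ratings
    U += lr * (err @ V - reg * U)       # gradient step on user factors
    V += lr * (err.T @ U - reg * V)     # gradient step on item factors

print(np.round(U @ V.T, 2))             # predicted ratings, incl. missing cells
```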
With Airflow being the open-source standard for workflow orchestration, knowing how to write Airflow DAGs has become an essential skill for every data engineer. This eBook provides a comprehensive overview of DAG-writing features with plenty of example code. You’ll learn how to: understand the building blocks of DAGs, combine them into complex pipelines, and schedule your DAG to run exactly when you want it to; write DAGs that adapt to your data at runtime and set up alerts and notifications; scale you
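As a taste of those building blocks, here is a minimal sketch of a scheduled DAG with two dependent tasks, assuming Airflow 2.4+; the IDs, callables, and schedule are illustrative.

```python
# Minimal DAG sketch: a DAG object, two tasks, a dependency, and a schedule.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("extracting...")

def load():
    print("loading...")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2023, 1, 1),
    schedule="@daily",      # run exactly when you want it to
    catchup=False,
    tags=["example"],
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # load runs only after extract succeeds
```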
Companies need to analyze large volumes of data, leading to an increase in data producers and consumers within their IT infrastructures. These companies collect data from production applications and B2B SaaS tools (e.g., Mailchimp). This data makes its way into a data repository, such as a data warehouse (e.g., Redshift), and is shown to users via a dashboard for decision-making.
As a radiologist-owned alliance built by physicians, Collaborative Imaging knows a thing or two about what it means to be healthy. And the same goes for their data. From revenue cycle management to telehealth, Collaborative Imaging’s physician-conceived platform is solving some of the biggest technology challenges facing modern medical practices. And with hundreds of hospitals utilizing Collaborative Imaging’s data products to optimize their practices, data quality is paramount for the data tea
In this new webinar, Tamara Fingerlin, Developer Advocate, will walk you through many Airflow best practices and advanced features that can help you make your pipelines more manageable, adaptive, and robust. She'll focus on how to write best-in-class Airflow DAGs using the latest Airflow features like dynamic task mapping and data-driven scheduling!
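For readers unfamiliar with those two features, here is a minimal sketch, assuming Airflow 2.4+, of dynamic task mapping with `.expand()` and data-driven scheduling with a Dataset; the URI and task names are made up.

```python
# Sketch of dynamic task mapping and Dataset-driven scheduling (Airflow 2.4+).
from datetime import datetime
from airflow.datasets import Dataset
from airflow.decorators import dag, task

reports = Dataset("s3://example-bucket/reports/")  # hypothetical dataset URI

@dag(start_date=datetime(2023, 1, 1), schedule="@daily", catchup=False)
def produce_reports():
    @task
    def list_regions():
        return ["us", "eu", "apac"]

    @task(outlets=[reports])
    def build_report(region: str):
        print(f"building report for {region}")

    # Dynamic task mapping: one mapped task instance per region, decided at runtime.
    build_report.expand(region=list_regions())

@dag(start_date=datetime(2023, 1, 1), schedule=[reports], catchup=False)
def consume_reports():
    # Data-driven scheduling: this DAG runs whenever the Dataset is updated.
    @task
    def notify():
        print("new reports available")

    notify()

produce_reports()
consume_reports()
```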
To inspire DoorDash consumers to order from the platform, there are few tools more powerful than a compelling image, which raises the questions: what is the best image to show each customer, and how can we build a model to determine that programmatically using each merchant’s available images? (Figure 1: Discovery surfaces with merchant images.) Out of all the different information presented on the home page (see Figure 1), studies with consumers have repeatedly shown that images play the m
Manually managing the lifecycle of Kubernetes nodes can become difficult as a cluster scales, especially if your clusters are multi-tenant and self-managed. You may need to replace nodes for various reasons, such as OS upgrades and security patches. One of the biggest challenges is how to terminate nodes without disturbing tenants. In this post, I’ll describe the problems we encountered administering Yelp’s clusters and the solutions we implemented.
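As background on what graceful node termination involves (a generic sketch, not Yelp's actual tooling), cordoning a node and evicting its pods via the Eviction API, which honors PodDisruptionBudgets, might look like this with the official kubernetes Python client; the node name is a placeholder.

```python
# Sketch of a graceful node drain: cordon the node, then evict its pods so
# controllers reschedule them elsewhere while PodDisruptionBudgets are respected.
from kubernetes import client, config

def drain_node(node_name: str) -> None:
    config.load_kube_config()          # or config.load_incluster_config()
    v1 = client.CoreV1Api()

    # 1. Cordon: mark the node unschedulable so no new pods land on it.
    v1.patch_node(node_name, {"spec": {"unschedulable": True}})

    # 2. Evict every pod on the node, skipping DaemonSet-managed pods
    #    (the DaemonSet controller would immediately recreate them anyway).
    pods = v1.list_pod_for_all_namespaces(
        field_selector=f"spec.nodeName={node_name}"
    )
    for pod in pods.items:
        owners = pod.metadata.owner_references or []
        if any(o.kind == "DaemonSet" for o in owners):
            continue
        eviction = client.V1Eviction(
            metadata=client.V1ObjectMeta(
                name=pod.metadata.name, namespace=pod.metadata.namespace
            )
        )
        # Eviction (unlike plain deletion) honors PodDisruptionBudgets, so
        # tenants keep their minimum available replicas during the replacement.
        v1.create_namespaced_pod_eviction(
            name=pod.metadata.name,
            namespace=pod.metadata.namespace,
            body=eviction,
        )

drain_node("example-node-1")  # hypothetical node name
```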
Speaker: Ben Epstein, Stealth Founder & CTO | Tony Karrer, Founder & CTO, Aggregage
When tasked with building a fundamentally new product line with deeper insights than previously achievable for a high-value client, Ben Epstein and his team faced a significant challenge: how to harness LLMs to produce consistent, high-accuracy outputs at scale. In this new session, Ben will share how he and his team engineered a system (based on proven software engineering approaches) that employs reproducible test variations (via temperature 0 and fixed seeds), and enables non-LLM evaluation m
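As a rough illustration of the "temperature 0 and fixed seeds" idea (a generic sketch, not the system from the session), a chat completion pinned for best-effort reproducibility might look like this with the OpenAI Python SDK; the model name and prompt are placeholders.

```python
# Sketch: reduce output variation by disabling sampling randomness (temperature=0)
# and fixing a seed for best-effort determinism.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",            # placeholder model name
    messages=[{"role": "user", "content": "Summarize this ticket: ..."}],
    temperature=0,                  # no sampling randomness
    seed=42,                        # request best-effort determinism
)

print(resp.choices[0].message.content)
# system_fingerprint changes when the backend configuration changes, which is
# one reason identical (temperature, seed) calls can still drift over time.
print(resp.system_fingerprint)
```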