Dagster for MLOps: Deep Dive into AI Orchestration
Learn what it really takes to run production-grade ML systems without breaking your architecture or compliance efforts. Join Dagster and Neurospace to learn:
- How to build AI pipelines with orchestration baked in
- How to track data lineage for audits and traceability
- Tips for designing compliant workflows under the EU AI Act
Register for the technical session.

DuckDB: DuckLake - SQL as a Lakehouse Format
DuckDB announced a new open table format.
Marketing teams frequently encounter challenges in accessing their data, often depending on technical teams to translate it into actionable insights.
In Airflow, DAGs (your data pipelines) support nearly every use case. As these workflows grow in complexity and scale, efficiently identifying and resolving issues becomes a critical skill for every data engineer. This is a comprehensive guide, with best practices and examples, to debugging Airflow DAGs. You'll learn how to:
- Create a standardized process for debugging to quickly diagnose errors in your DAGs
- Identify common issues with DAGs, tasks, and connections
- Distinguish between Airflow-related and external issues
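As a quick taste of the kind of standardized debugging loop the guide describes, here is a minimal sketch, assuming Airflow 2.5+ (where `dag.test()` was introduced); the DAG and task names are hypothetical. Running a DAG in a single local process makes log output and breakpoints easy to inspect:

```python
# debug_example_dag.py -- a minimal sketch for debugging a DAG locally.
# Assumes Airflow 2.5+ (dag.test() was added then); DAG and task names are hypothetical.
import logging
import pendulum
from airflow.decorators import dag, task

log = logging.getLogger(__name__)


@dag(schedule=None, start_date=pendulum.datetime(2025, 1, 1), catchup=False)
def debug_example():
    @task
    def extract() -> list[int]:
        data = [1, 2, 3]
        log.info("Extracted %d records", len(data))  # log liberally so failures are traceable
        return data

    @task
    def load(records: list[int]) -> None:
        if not records:
            raise ValueError("No records to load")  # fail loudly instead of silently
        log.info("Loaded %d records", len(records))

    load(extract())


dag_object = debug_example()

if __name__ == "__main__":
    # Runs all tasks in-process without the scheduler, so you can attach a debugger.
    dag_object.test()
```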
Until now, sharing data between enterprise systems often meant complex pipelines, duplication, and lock-in. With Oracle's support for Delta Sharing, that's no longer the case.
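For context on what consuming a Delta Share looks like on the receiving end, here is a minimal sketch using the open-source delta-sharing Python connector; the profile file and share/schema/table names are placeholders, and the provider-side (Oracle) setup is out of scope here:

```python
# A minimal sketch of reading a shared table via the Delta Sharing protocol.
# Assumes `pip install delta-sharing`; the profile file and table coordinates are hypothetical.
import delta_sharing

# The profile file is issued by the data provider and contains the sharing endpoint and token.
profile_file = "config.share"

# Table URL format: <profile-file>#<share>.<schema>.<table>
table_url = f"{profile_file}#sales_share.analytics.orders"

# Load the shared table directly into a pandas DataFrame -- no pipeline, no copy to maintain.
df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```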
In a rapidly evolving financial landscape, one thing is clear: banks that prioritize agility and data-driven customer-centricity are not just staying afloat, they're thriving. During the recent American Banker webinar, "Smart Banking in 2025: Intelligent Technologies Defining CX and Operations," I had the pleasure of speaking alongside Sarah Howell about the big shifts seen in banking, particularly around digital transformation, compliance, and customer experience (CX).
In today's digitally linked world, intuition is no longer sufficient to drive B2B marketing. Data analytics has emerged as a critical component of effective marketing strategies, allowing companies to make informed decisions that improve performance and deliver quantifiable results. With vast amounts of client data available across digital channels, organizations that use data analytics can gain a significant competitive edge.
Microsoft Fabric is a next-generation data platform that combines business intelligence, data warehousing, real-time analytics, and data engineering into a single integrated SaaS framework. Built on the principles of governance, scalability, and simplicity, it enables companies to handle their entire analytics lifecycle in one place.
Apache Airflow® 3.0, the most anticipated Airflow release yet, officially launched this April. As the de facto standard for data orchestration, Airflow is trusted by over 77,000 organizations to power everything from advanced analytics to production AI and MLOps. With the 3.0 release, the top-requested features from the community were delivered, including a revamped UI for easier navigation, stronger security, and greater flexibility to run tasks anywhere at any time.
Learn how to easily extract the data you need from Apache Kafka by generating Apache Flink SQL commands with natural language prompts or questions in this step-by-step demo.
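To give a rough sense of what the generated Flink SQL looks like once you run it yourself, here is a hedged PyFlink sketch; the topic, brokers, and schema are placeholders, and it assumes the Kafka SQL connector JAR is on the classpath:

```python
# A minimal PyFlink sketch of querying a Kafka topic with Flink SQL.
# Assumes `pip install apache-flink` and the flink-sql-connector-kafka JAR is available;
# the topic name, brokers, and schema below are hypothetical.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Declare the Kafka topic as a table so it can be queried with plain SQL.
t_env.execute_sql("""
    CREATE TABLE orders (
        order_id STRING,
        amount   DOUBLE,
        ts       TIMESTAMP(3)
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = 'localhost:9092',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# The kind of query a natural-language prompt might generate: total amount per order id.
result = t_env.execute_sql(
    "SELECT order_id, SUM(amount) AS total FROM orders GROUP BY order_id"
)
result.print()  # streams results to stdout for the demo
```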
Imagine entering a control room with complete control over your data ecosystem. You won’t have to deal with siloed systems, jump between tools, or write endless lines of code to make data useful. That’s how Microsoft Fabric works. With its ability to seamlessly integrate data engineering, analytics, and business intelligence, Microsoft Fabric stands out as the all-in-one superhero in a world where data is abundant but insights are scarce.
At Snowflake, our mission is to empower every enterprise to achieve its full potential through data and AI. We actively support innovative companies within our ecosystem that demonstrate clear value for our customers, which is why we're excited to invest in Honeydew, a former Snowflake Startup Challenge finalist. Honeydew's Semantic Layer revolutionizes the way data teams collaborate on business intelligence and deliver impactful data-driven insights.
Speaker: Alex Salazar, CEO & Co-Founder @ Arcade | Nate Barbettini, Founding Engineer @ Arcade | Tony Karrer, Founder & CTO @ Aggregage
There’s a lot of noise surrounding the ability of AI agents to connect to your tools, systems and data. But building an AI application into a reliable, secure workflow agent isn’t as simple as plugging in an API. As an engineering leader, it can be challenging to make sense of this evolving landscape, but agent tooling provides such high value that it’s critical we figure out how to move forward.
Docker completely changed the development, packaging, and deployment of applications. Docker provides consistent environments from development to production by isolating applications in containers. The Dockerfile is the foundation of this ecosystem since it serves as a guide for creating Docker images. This blog covers everything from the fundamentals to more complex subjects like comparisons, troubleshooting, and best practices.
Speaker: Andrew Skoog, Founder of MachinistX & President of Hexis Representatives
Manufacturing is evolving, and the right technology can empower, not replace, your workforce. Smart automation and AI-driven software are revolutionizing decision-making, optimizing processes, and improving efficiency. But how do you implement these tools with confidence and ensure they complement human expertise rather than override it? Join industry expert Andrew Skoog as he explores how manufacturers can leverage automation to enhance operations, streamline workflows, and make smarter, data-driven decisions.
Enterprises are navigating a complex landscape marked by evolving challenges in privacy, economics, and the rapid advancement of AI. Consumer data privacy is no longer just an expectation; it's non-negotiable, the foundation of consumer trust. Economic volatility has pushed companies to do more with less, demanding greater efficiency amid ever-changing regulations.
Today, we announced the dbt Fusion engine. Fusion isn't just one thing; it's a set of interconnected components working together to power the next generation of analytics engineering. This post maps out each piece of the Fusion architecture, explains how they fit together, and clarifies what's available to you whether you're compiling from source, using our pre-built binaries, or developing within a dbt Fusion-powered product experience.
At ThoughtSpot, we're on a mission to empower every business user to become a data champion. Over the past year, I've witnessed firsthand how organizations across Australia and New Zealand are embracing this vision, transforming the way they work, make decisions, and serve their customers. Today, I'm excited to share some of the incredible momentum we're seeing in the region and to celebrate the forward-thinking organizations leading the charge.
Moving data from Atlassian Jira to Google BigQuery enables scalable analysis of engineering metrics such as cycle time, throughput, and issue trends, and supports forecasting and planning based on historical data. Moreover, with BigQuery ML or external AI tools, teams can apply machine learning to forecast delivery delays, identify anomalies, or prioritize issues based on historical patterns.
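To make the BigQuery ML idea concrete, here is a hedged sketch, with made-up dataset and table names, that trains an ARIMA_PLUS model on daily issue throughput exported from Jira and queries a forecast with the google-cloud-bigquery client:

```python
# A minimal sketch of forecasting Jira throughput with BigQuery ML.
# Assumes `pip install google-cloud-bigquery`, an authenticated environment, and a
# `jira.daily_throughput(day DATE, issues_closed INT64)` table -- all names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

# Train a time-series model on historical daily throughput.
client.query("""
    CREATE OR REPLACE MODEL `jira.throughput_model`
    OPTIONS (
        model_type = 'ARIMA_PLUS',
        time_series_timestamp_col = 'day',
        time_series_data_col = 'issues_closed'
    ) AS
    SELECT day, issues_closed FROM `jira.daily_throughput`
""").result()  # wait for training to finish

# Forecast the next 14 days of throughput.
rows = client.query("""
    SELECT forecast_timestamp, forecast_value
    FROM ML.FORECAST(MODEL `jira.throughput_model`, STRUCT(14 AS horizon))
""").result()

for row in rows:
    print(row.forecast_timestamp, round(row.forecast_value, 1))
```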
With Airflow being the open-source standard for workflow orchestration, knowing how to write Airflow DAGs has become an essential skill for every data engineer. This eBook provides a comprehensive overview of DAG writing features with plenty of example code. You'll learn how to:
- Understand the building blocks of DAGs, combine them in complex pipelines, and schedule your DAG to run exactly when you want it to
- Write DAGs that adapt to your data at runtime and set up alerts and notifications
- Scale your pipelines
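For a feel of the scheduling and notification building blocks before opening the eBook, here is a minimal hedged sketch; the DAG name, schedule, and alerting function are illustrative, not a prescribed setup:

```python
# A minimal sketch of a scheduled DAG with failure notifications; names are illustrative.
import pendulum
from airflow.decorators import dag, task


def notify_on_failure(context):
    # In practice this could post to Slack, PagerDuty, or email.
    print(f"Task {context['task_instance'].task_id} failed for run {context['run_id']}")


@dag(
    schedule="@daily",  # run exactly once per day; cron strings work here too
    start_date=pendulum.datetime(2025, 1, 1, tz="UTC"),
    catchup=False,
    default_args={"on_failure_callback": notify_on_failure},
)
def daily_ingest():
    @task
    def extract() -> list[dict]:
        return [{"id": 1}, {"id": 2}]

    @task
    def load(rows: list[dict]) -> None:
        print(f"loading {len(rows)} rows")

    load(extract())  # dependencies are inferred from the call chain


daily_ingest()
```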
In the world of data analytics, Microsoft Fabric and Tableau stand out as powerful tools, but they have very different strengths. While Microsoft Fabric offers an all-in-one data platform for enterprises deeply integrated with Azure, Tableau focuses on intuitive, high-quality data visualization for users at all levels. This guide compares their features, architecture, pricing, and use cases to help you decide which is the best fit for your data strategy.
It sounds like a cliché to say it's a transformative time in telecommunications. But that's never been more accurate. Companies across the entire ecosystem are undergoing unprecedented change and incredible innovation across every aspect of the business. Fueled by efficiency, cost, and customer experience pressures, telecoms must ensure that networks are not only highly reliable but easily adaptable to the rapidly changing needs of modern businesses and customers.
Today, we announced that the dbt Fusion engine is available in beta. If Fusion works with your project today, great! You're in for a treat. If it's your first day using dbt, welcome! You should start on Fusion; you're in for a treat too. Today is Launch Day, the first day of a new era: the Age of Fusion. We expect many teams with existing projects will encounter at least one issue that will prevent them from adopting the dbt Fusion engine in production environments.
In this new webinar, Tamara Fingerlin, Developer Advocate, will walk you through many Airflow best practices and advanced features that can help you make your pipelines more manageable, adaptive, and robust. She'll focus on how to write best-in-class Airflow DAGs using the latest Airflow features like dynamic task mapping and data-driven scheduling!
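As a small taste of those two features, here is a hedged sketch combining dynamic task mapping with a Dataset-triggered schedule; the dataset URI and task logic are placeholders:

```python
# A minimal sketch of dynamic task mapping plus data-driven (Dataset) scheduling.
# Assumes Airflow 2.4+ for Datasets and 2.3+ for .expand(); names and URIs are placeholders.
import pendulum
from airflow.datasets import Dataset
from airflow.decorators import dag, task

orders_dataset = Dataset("s3://example-bucket/orders/")  # hypothetical upstream dataset


@dag(
    schedule=[orders_dataset],  # run whenever an upstream task updates this dataset
    start_date=pendulum.datetime(2025, 1, 1, tz="UTC"),
    catchup=False,
)
def process_orders():
    @task
    def list_partitions() -> list[str]:
        # In a real DAG this would be discovered at runtime, e.g. by listing S3 prefixes.
        return ["2025-01-01", "2025-01-02", "2025-01-03"]

    @task
    def process(partition: str) -> None:
        print(f"processing partition {partition}")

    # One mapped task instance is created per partition, decided at runtime.
    process.expand(partition=list_partitions())


process_orders()
```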
In today’s data-driven world, the role of an AWS Data Engineer is more important than ever! Organizations are on the lookout for talented professionals who can design, build, and maintain strong data pipelines and infrastructure on the Amazon Web Services (AWS) platform. If you’re eager to kickstart your career in AWS data engineering or ready to take it to the next level, mastering the interview process is essential.
When I was learning about watermarks in Apache Flink, I saw that they take the smallest event times, instead of the biggest ones as in Apache Spark Structured Streaming. That puzzled me: how is it possible that the pipeline doesn't go back to the past? The answer came when I reread the Streaming Systems book; there was one keyword I had missed that clarified everything.
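To see why taking the smallest value doesn't send the pipeline back in time, here is a small illustrative sketch, not Flink's actual code, of how an operator might combine watermarks from several inputs: each input's watermark only ever moves forward, so the minimum across inputs also only moves forward, just more conservatively.

```python
# An illustrative sketch (not Flink internals) of combining per-input watermarks.
# Each input advances its own watermark monotonically; the operator's output watermark
# is the minimum across inputs, so it also advances monotonically -- it never moves back.

class WatermarkCombiner:
    def __init__(self, num_inputs: int):
        # Start at "minus infinity" until every input has reported at least once.
        self.input_watermarks = [float("-inf")] * num_inputs

    def on_watermark(self, input_id: int, watermark: float) -> float:
        # A single input's watermark only moves forward.
        self.input_watermarks[input_id] = max(self.input_watermarks[input_id], watermark)
        # The operator emits the smallest input watermark: it can only declare time t
        # "complete" once *all* inputs are past t.
        return min(self.input_watermarks)


combiner = WatermarkCombiner(num_inputs=2)
print(combiner.on_watermark(0, 10.0))   # -inf: input 1 has not reported yet
print(combiner.on_watermark(1, 7.0))    # 7.0: the slower input holds the clock back
print(combiner.on_watermark(1, 12.0))   # 10.0: the output advances, never retreats
```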
Speaker: Ben Epstein, Stealth Founder & CTO | Tony Karrer, Founder & CTO, Aggregage
When tasked with building a fundamentally new product line with deeper insights than previously achievable for a high-value client, Ben Epstein and his team faced a significant challenge: how to harness LLMs to produce consistent, high-accuracy outputs at scale. In this new session, Ben will share how he and his team engineered a system (based on proven software engineering approaches) that employs reproducible test variations (via temperature 0 and fixed seeds) and enables non-LLM evaluation metrics at scale.
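For a rough idea of the "reproducible test variations" ingredient, here is a hedged sketch using the OpenAI Python client with temperature 0 and a fixed seed; the model name, prompt, and helper are placeholders, and determinism is best-effort rather than guaranteed:

```python
# A minimal sketch of pinning down LLM variability for repeatable evaluation.
# Assumes `pip install openai` and OPENAI_API_KEY set; the model name and prompt are
# placeholders, and even with temperature=0 and a fixed seed, determinism is best-effort.
from openai import OpenAI

client = OpenAI()


def classify(ticket: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model
        temperature=0,         # remove sampling randomness
        seed=42,               # request reproducible sampling across runs
        messages=[
            {"role": "system", "content": "Reply with exactly one word: bug, feature, or question."},
            {"role": "user", "content": ticket},
        ],
    )
    return response.choices[0].message.content.strip()


# Re-running the same input should now (mostly) return the same label, so simple
# non-LLM checks, such as exact match against expected labels, can serve as evaluation.
print(classify("The export button crashes the app when I click it twice."))
```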