dispatch from data day texas 2026

what I learned at the final Data Day Texas

Data Day Texas 2026 has concluded, and it will be the last Data Day Texas in this format. I have a few interesting notes and then an observation on this final edition.

keynote

The Skills that Matter - when everything changes by Patrick McFadin (slides)

The main takeaway was that three core skills still have to grow over time for software engineers:

  • judgement: what is the right decision given limited information and the decider’s context?
  • adaptability: how do you assimilate the latest knowledge in your field?
  • community: how do you join others who are in your field rather than going it alone?

Agreed on the first two. I’m not sure about the third, but I’m not good at participating in tech communities, so maybe that’s me. I also thought one skill was missing: discernment. This is how you cut through the noise in our space. AI sunshine pumpers can really drag good AI efforts down. Discernment is how you’ll filter out the dirt to find the gold. It’s different from judgement, which is about making the right decision, although the two are related.

The controversial part of the talk, to me, was the claim that we are reaching the “post-database era” where databases all converge on the same storage-ish with the same API-ish. That includes PostgreSQL, Elastic, Cassandra, Snowflake, Redis, MongoDB, etc. I do think data access patterns are coalescing, but I’m not sure about the claim that we’ll all essentially be using copies of the same database system. I also think this foreshadows why this conference is the last Data Day Texas. More on that later.

The talk had some other fun tidbits, classic keynote fare, such as the first commercial database (Sabre, for airline reservations) and how much automation panic there has been since the dawn of automation.

context engineering

Context > Prompts by Lena Hall

This was a long, winding talk on different parts of AI-assisted engineering. I’m not usually a fan of developer relations talks due to their lack of depth, but this one gave me healthy food for thought, so I enjoyed it.

Part 1 is the patterns. In early 2025 we used targeted prompts with strategic restarts for context cleanup. In late 2025 we moved to spec (ticket)-driven development thanks to code-scanning tools. Now we plan-then-build (sketched after this list):

  • the programming agents plan the work,
  • we review,
  • the agent generates artifacts,
  • and we review the artifacts at the end.
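A minimal sketch of that loop, assuming a hypothetical Agent with plan and execute steps and a blocking human review gate between them; none of these names come from the talk:

```python
# Hypothetical plan-then-build loop; Agent, Plan, and the review gates are
# illustrative stand-ins, not a real library.
from dataclasses import dataclass

@dataclass
class Plan:
    steps: list[str]

class Agent:
    def plan(self, spec: str) -> Plan:
        # In practice this calls the planning model with the spec/ticket.
        return Plan(steps=[f"implement: {spec}"])

    def execute(self, plan: Plan) -> list[str]:
        # In practice this calls the coding model and returns artifacts
        # (diffs, files, test output).
        return [f"artifact for '{step}'" for step in plan.steps]

def human_approves(item: object) -> bool:
    # Stand-in for the two review gates: plan review and artifact review.
    return input(f"approve {item!r}? [y/N] ").strip().lower() == "y"

agent = Agent()
plan = agent.plan("add retry logic to the ingest job")  # the agent plans the work
if human_approves(plan):                                # we review the plan
    artifacts = agent.execute(plan)                     # the agent generates artifacts
    if human_approves(artifacts):                       # we review the artifacts at the end
        print("ship it")
```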

What’s the next pattern? Unsure, but I know the current advice is to skip ahead: auto-accept edits and review them later.

Part 2 is the architecture. There’s been a burgeoning argument to decouple planning from execution. Right now most people use Claude Opus 4.5 for planning and Claude Sonnet 4.5 (or 4) for writing code. But why not Gemini 3 Flash for execution? Are plans really that specific to the agent? Is the new architecture SQL-like: write and review the query, then review the artifacts to debug?
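If plans really aren’t specific to the agent, the decoupling could be as thin as routing the two phases to different models behind OpenAI-compatible clients. A sketch under that assumption; the model names and client setup are placeholders, not anything from the talk:

```python
# Hypothetical planner/executor split: a stronger model writes the plan,
# a cheaper or faster model carries it out. Model names are placeholders;
# endpoints and keys come from the environment.
from openai import OpenAI

planner = OpenAI()   # client for whatever provider hosts the planning model
executor = OpenAI()  # could be a different provider, or a local endpoint

def plan(task: str) -> str:
    resp = planner.chat.completions.create(
        model="planning-model",   # placeholder, e.g. a frontier model
        messages=[{"role": "user", "content": f"Write a step-by-step plan: {task}"}],
    )
    return resp.choices[0].message.content

def execute(plan_text: str) -> str:
    resp = executor.chat.completions.create(
        model="execution-model",  # placeholder, e.g. a small fast model
        messages=[{"role": "user", "content": f"Carry out this plan:\n{plan_text}"}],
    )
    return resp.choices[0].message.content

print(execute(plan("add pagination to the /users endpoint")))
```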

Part 3 is the organization. I need the first two parts to explain why this is the important brain food. Given that plan review is the focus, I wonder if I should treat each project like a startup. Then, instead of hiring for a role, write or use an agent spec. How far can you get? The latest experiment is of course Steve Yegge’s Gas Town, but what if you did not generalize? Your design document already has a task breakdown; can you add the agents you need to execute it? Will engineering be more puppetmastering and less woodworking? Since this is software, I have infinite wood because I can copy, edit, and use as many agents as storage allows. This is my most important takeaway from Data Day Texas 2026: I should start treating software projects like startups staffed with agents.
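To make “write an agent spec instead of hiring” concrete, here is one shape such a spec could take; every field and value below is my own hypothetical, not something proposed in the talk:

```python
# Hypothetical "agent spec" standing in for a role you would otherwise hire.
# Field names and values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    role: str                     # the "job title" on the project
    charter: str                  # what this agent owns end to end
    model: str                    # which model backs it (placeholder)
    tools: list[str] = field(default_factory=list)
    done_when: list[str] = field(default_factory=list)  # acceptance criteria

# One spec per line item in the design document's task breakdown.
backend = AgentSpec(
    role="backend engineer",
    charter="implement the ingest API described in section 3 of the design doc",
    model="execution-model",
    tools=["git", "pytest"],
    done_when=["endpoints pass the contract tests", "PR opened for review"],
)
```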

local LLMs

Local AI Saves People by Chris Brousseau

This talk was about running LLMs locally. I talked with Chris afterwards; interesting engineer. Chris has a book too, LLMs in Production, and a video of the work presented here. I would say the video is a good reference, and possibly the book is the better thing to get. Others are doing this now, but this was my most important action item from Data Day Texas 2026: I should run an LLM on my desktop at home. Even if it is slow, it’s like when I built computers in the 2000s: it’s the fun part of engineering.

Chris did push for others to run local LLMs in production, or at least for programming work. I’m not convinced; I’ll need to see how Claude Code or OpenCode work when pointed at a local LLM.
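Before pointing a coding agent at a local model, a quick smoke test against an OpenAI-compatible local endpoint (Ollama exposes one on localhost:11434) seems like a reasonable first step. A minimal sketch, assuming a model has already been pulled locally; the model name is a placeholder:

```python
# Minimal smoke test against a locally served model.
# Assumes Ollama (or any OpenAI-compatible server) is listening on localhost:11434;
# the model name below is a placeholder for whatever was pulled locally.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # ignored by the local server, required by the client
)

resp = client.chat.completions.create(
    model="local-model",  # placeholder, e.g. whatever `ollama pull` fetched
    messages=[{"role": "user", "content": "Write a haiku about running on my desktop."}],
)
print(resp.choices[0].message.content)
```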

other talks

Rewriting SQLite in Rust by Glauber Costa was more about the story than the tech, so unfortunately I didn’t get what I wanted (a technical deep dive).

Stop Guessing, Start Measuring by Jon Haddad had some cool outputs and Apache Cassandra advice. No critical takeaways from me here though, just a fun talk.

The Weaponization of AI by Trey Blalock was more of a rattled-off list of fun websites. The most thorough one was APT threat tracking. Cool talk but no takeaways, more stories.

on the conference itself

The organizer said there was a belief that the get-together-at-scale event model – the “conference” – was dead. I’m unsure, given the number of people at vendor conferences, academic conferences, and industry conferences. I have a different take: there is no data in “data day” anymore. In 2024, AI was added to Data Day Texas, and more esoteric or targeted data concepts were replaced by talks on AI usage. In 2023, Zhamak Dehghani talked about the data mesh; that concept wasn’t even remotely a part of this edition. Data Day Texas’s final keynote inadvertently came to the same conclusion: if database technologies are coalescing, then where is the novel data engineering?

The organizer said even in 2023 that this was a conference for practitioners. In 2026, that meant no more data nerds.
