Roelant Vos | An expert view on Agile Data Warehousing

Running SSIS packages continuously without scheduling

No more Batch ETL

A few weeks ago I wrote a post about the concept of continuous execution of individual ETL processes to achieve ‘eventual consistency’. In that post I made the case to step away from ‘Batch’ execution of ETLs, where related processes are executed as a mini workflow, in favour of fully independent execution of the individual (modular) ETL processes. I have spent some time developing this concept in SQL Server using...
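The 'no more batch' idea can be sketched as a simple worker loop: every modular ETL process runs independently on each cycle, and a failure in one process never blocks the others, so the warehouse converges across repeated cycles. A minimal sketch in Python, where the process names and both helper functions are illustrative assumptions, not the actual SSIS implementation:

```python
import time

def run_cycle(processes):
    """Run each modular ETL process independently within one cycle.

    A failure in one process does not block the others; the failed
    process is simply retried on the next cycle, which is what lets
    the overall system become eventually consistent.
    """
    results = {}
    for name, process in processes.items():
        try:
            process()
            results[name] = "ok"
        except Exception as exc:
            results[name] = "failed: " + str(exc)
    return results

def run_continuously(processes, cycles=None, pause_seconds=0):
    """Keep executing cycles; cycles=None would loop forever."""
    done = 0
    while cycles is None or done < cycles:
        run_cycle(processes)
        done += 1
        if pause_seconds:
            time.sleep(pause_seconds)
```

In SSIS terms each entry would start one package; the loop replaces the batch workflow that otherwise chains the packages together.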

Beyond ETL Generation & DWH Virtualisation – what’s next?

Bridging the gap

At the recent Data Modelling Zone (DMZ) in Germany I presented an overview of the ideas around Data Warehouse Virtualisation and the thought processes leading up to them. In this post I wanted to elaborate on some of these ideas a bit further, as together they can be combined to enable something even more powerful. This post provides an overview of how various (technical) concepts can together enable faster delivery of meaningful...

Embrace your Persistent Staging Area for Eventual Consistency

If you like your PSA so much…

A colleague of mine asked me this: ‘if you like the Persistent Staging Area (PSA) concept so much, why not embrace it all the way?’. By this, he meant loading upstream layers such as the Data Vault directly from the PSA instead of from a Staging Area. I was a bit resistant to the idea at first, because this would require incorporation of the PSA as a mandatory...

Biml Express 2017 tests, comments and work-arounds

The new version of Biml Express, the free script-based ETL generation plug-in for Visual Studio provided by Varigence, has been out for a few months. Mid-July 2017, to be precise. However, only recently have I been able to find the time to properly regression-test this new release against my library of patterns and scripts. The driver is the upcoming Data Modelling Zone event and the Data Vault Implementation & Automation training sessions – better keep up...

Updated sample and metadata models for Data Vault generation and virtualisation

After a bit of a pause in working on the weblog and technology (caused by an extended period of high pressure in the day job) I am once again working on some changes in the various concepts I’m writing about on this site. Recently I was made aware of this great little tool that supports easy creation and sharing of simple data models: Quick Database Diagrams (‘QuickDBD’). The tool is 100% online and can be...

Using a Natural Business Key – the end of hash keys?

Do we still need Hash Keys?

Now there is a controversial topic! I have been thinking about the need for hash keys for almost a year now, ever since I attended the Data Vault Day in Germany (Hamburg) at the end of 2016. During this one-day community event, the topic of stepping away from hash keys was raised in one of the discussions following a case study. Both the presentation and the ensuing discussion were in German,...
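For context on what the hash key debate is about: a Data Vault hash key is a surrogate derived deterministically from the natural business key, so the same business key always yields the same key value on any platform. A minimal sketch in Python, assuming the common convention of delimiter-concatenated, sanitised key parts; the MD5 algorithm and upper-case hex encoding are illustrative choices, not a prescription:

```python
import hashlib

def hash_key(*business_key_parts, delimiter="|"):
    """Derive a surrogate hash key from a natural business key.

    The key parts are trimmed, upper-cased and joined with a
    delimiter before hashing, so cosmetic differences in the source
    value (padding, case) still produce the same key.
    """
    normalised = delimiter.join(
        str(part).strip().upper() for part in business_key_parts
    )
    return hashlib.md5(normalised.encode("utf-8")).hexdigest().upper()
```

Using the natural business key directly would skip this step entirely, at the cost (or benefit) of carrying the full key value through all joins, which is exactly the trade-off under discussion.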

When a full history of changes is too much: implementing abstraction for Point-In-Time (PIT) and Dimension tables

When changes are just too many

When you construct a Point-In-Time (PIT) table or Dimension from your Data Vault model, do you sometimes find yourself in the situation where there are too many change records present? This is because, in the standard Data Vault design, tiny variations when loading data may result in the creation of very small time slices when the various historised data sets (e.g. Satellites) are combined. There is such a thing as too...
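The abstraction idea here can be sketched as collapsing consecutive time slices whose significant attributes are identical: only the attributes that define a 'real' change at the chosen level of abstraction are compared, and the earliest effective date of each unchanged run is kept. A minimal sketch in Python; the row shape and attribute names are hypothetical, not the actual PIT structure:

```python
def collapse_timeline(rows, significant):
    """Collapse consecutive time slices with identical significant attributes.

    `rows` is assumed to be sorted by effective date for a single key;
    `significant` lists the attributes that constitute a real change.
    The first row of each run of identical values is kept, so the
    surviving row carries the earliest effective date of that run.
    """
    collapsed = []
    previous = None
    for row in rows:
        values = tuple(row[attr] for attr in significant)
        if values != previous:
            collapsed.append(row)
            previous = values
    return collapsed
```

Narrowing or widening the `significant` list is what moves the result between a full change history and a coarser, abstracted timeline.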

Updated the Data Vault implementation & automation training for 12-14 June in Germany

On the 12th-14th of June I will be delivering the newly styled and updated Data Vault implementation and automation training together with Doerffler & Partner. I am really looking forward to continuing the collaboration after last year’s awesome Data Vault Day (also organised by Doerffler). I am working really hard to wrap up the next layer of virtualisation to discuss there, and I’m really excited about it: imagine having multiple versions of not only the Data...

When is a change a ‘change’?

This is a post that touches on what I think is one of the essential best practices for ETL design: the ability to process multiple changes for the same key in a single pass. This is specifically relevant for typical ETL processes that load data to a time-variant target (PSA, Satellite, Dimension etc.). For non-time-variant targets (Hubs, Links etc.) the process is a bit easier, as this is essentially built into the patterns already :-). In a given...
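The single-pass idea can be sketched as follows: each incoming record is compared against the most recent version seen so far for its key, which starts out as the value already in the target. That way several changes for one key in the same delivery all survive as separate rows, while records identical to their predecessor are discarded. A minimal sketch in Python; the record shapes and the `detect_changes` name are illustrative assumptions, not the actual pattern:

```python
def detect_changes(incoming, current):
    """Process multiple changes per key in a single pass.

    `incoming` is assumed sorted by key and load date; `current`
    maps each key to its latest attribute payload in the target.
    Comparing against the running 'latest' value (not just the
    target) is what allows several changes for one key to pass
    through together.
    """
    latest = dict(current)
    changes = []
    for key, payload in incoming:
        if latest.get(key) != payload:
            changes.append((key, payload))
            latest[key] = payload
    return changes
```

A naive pattern that only compares each record against the target would either drop the second change for a key or load a duplicate; carrying the running value forward avoids both.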

Some insights about … Insights

Can I get some insights, please?

Over the years, I have come to dislike the term ‘insights’ almost as much as, say, ‘Data Lake’. And that’s saying something. Not because the concepts themselves are that closely related (they are, to some extent). Rather, it is because to me personally they both conjure up the same feeling: a mixture of annoyance and desperation. One of the reasons is that since the word ‘insights’...