The following article discusses Crunch’s approach to tracking.
Be assured that Crunch considers the challenges inherent in tracking studies, and its pathway for trackers balances efficiency with error prevention.
A key principle underlying tracking studies, from a data processing perspective, is that you want all the work you’ve done in previous waves to carry over. No one wants to have to recreate a variable or a dashboard from scratch.
Crunch Automation enables you to replicate and align datasets for append. In this article, we present a 5-step approach to tracking.
Crunch acknowledges that change is inevitable. Category labels change. New questions are added to questionnaires. Brands and statements are added to and subtracted from arrays. The back-end process that merges datasets behind the scenes is designed to accommodate changes between datasets (e.g. the addition of brands to a multiple response variable), but also to install safeguards where disparities exist that cannot be automatically reconciled.
Solving the alignment problem
What is alignment? Alignment is making sure that the information in dataset B is compatible with the information in dataset A.
When appending new data with changes, you need to align your data. Aligning data means that the variables are correctly updated in dataset A (the target schema) when you append dataset B (the incoming wave).
Crunch prevents datasets from merging when there are irreconcilable changes: you will see error messages when you try to combine misaligned data.
The good news is that the alignment issue is solved by using Crunch Automation.
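For intuition, here is a minimal sketch of what an alignment check conceptually does. This is an illustration in plain Python, not Crunch's actual implementation; the variable aliases and type names are invented for the example.

```python
# Illustrative sketch (not Crunch's internal code): before appending,
# flag variables whose type differs between the target schema (dataset A)
# and the incoming wave (dataset B).
def check_alignment(schema_a, schema_b):
    """Each schema maps a variable alias to a type string.
    Returns a list of human-readable conflicts; an empty list means aligned."""
    conflicts = []
    for alias, type_b in schema_b.items():
        type_a = schema_a.get(alias)
        if type_a is not None and type_a != type_b:
            conflicts.append(
                f"{alias}: dataset A is '{type_a}' but dataset B is '{type_b}'"
            )
    return conflicts

wave1 = {"age": "numeric", "brand_aware": "multiple_response"}
wave2 = {"age": "text", "brand_aware": "multiple_response"}
print(check_alignment(wave1, wave2))
# -> ["age: dataset A is 'numeric' but dataset B is 'text'"]
```

A conflict like the one above (a numeric variable arriving as text in the new wave) is exactly the kind of disparity that blocks an append until it is reconciled in the script.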
Wave data: discrete, cumulative or continuous?
Some users work with discrete wave data (either via file upload or direct import): there is a separate upload/import for Wave 1, Wave 2, Wave 3, and so on. This is the ideal scenario and fits best with the process outlined below. Essentially, each wave is uploaded as a separate dataset and then appended to the master dataset (after alignment work takes place).
Other users receive a cumulative data file. So they start with Wave 1 (n=100), then they’ll get a Wave 1 + Wave 2 file (n=200), then a Wave 1 + 2 + 3 file (n=300), and so forth. Crunch also accommodates this scenario: it just requires an additional step of excluding previous waves before appending (so you don’t double up).
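The exclusion step amounts to keeping only the rows from waves that have not yet been appended. A minimal sketch in Python, assuming a hypothetical wave identifier field in the data (the field name and row shape here are invented for illustration):

```python
# Illustrative sketch (hypothetical 'wave' field): with cumulative files,
# exclude rows from waves already appended so respondents are not doubled up.
def new_rows_only(cumulative_rows, appended_waves):
    """Keep only rows whose 'wave' value has not been appended yet."""
    return [row for row in cumulative_rows if row["wave"] not in appended_waves]

cumulative = [
    {"id": 1, "wave": 1}, {"id": 2, "wave": 1},
    {"id": 3, "wave": 2}, {"id": 4, "wave": 2},
]
print(new_rows_only(cumulative, appended_waves={1}))
# -> [{'id': 3, 'wave': 2}, {'id': 4, 'wave': 2}]
```

In practice you would apply the equivalent exclusion (e.g. a filter on a wave or date variable) before appending the cumulative file to the master dataset.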
Some users also work with continuous data collection. This is when data is streamed or updated via an integration (with a survey platform such as Qualtrics or Decipher, or perhaps your proprietary platform). When this happens you don’t have different datasets to append (it’s all just continuously updating a single dataset). Even with continuous data collection, Crunch Automation should be used to define the target schema (wave 1) so that, if the integration breaks, you can follow the general process described below.
The General Approach to Tracking
- Import Wave 1 as Dataset A
- Script all changes using Crunch Automation (your script is stored)
- Import Wave 2 as Dataset B
- Tweak your Crunch Automation script from Dataset A and run it on Dataset B
- Append Dataset B to Dataset A
The above 5 steps are a general process that works for discrete wave trackers. There are further considerations and nuances, which this article attempts to clarify. There are also scenarios for merging data that don’t fit this process perfectly (such as when you have a partial fieldwork export that you want to update once fieldwork ends), in which case you may use a different process.
How does Crunch handle adding brands and statements?
Suppose in Wave 1 you have an array (multiple response, categorical array, or numeric array) with 3 subvariables: Coke, Pepsi, Fanta. In Wave 2 you then have a new brand, Sprite, in the same array. The append process leaves you with a variable that has 4 subvariables; the fourth subvariable, Sprite, will be missing for all rows in Wave 1.
How you set this up depends on the type of array you are working with.
If the array was made in Crunch (e.g. using a CREATE command), then it’s considered a derived variable. That means when you copy your Crunch Automation script from Wave 1 to Wave 2, you will need to tweak the definition of the variable in Wave 2 to incorporate the new subvariable, Sprite. This is typically, though not always, the case when working with SPSS files.
If the array is already defined in the data file, or comes from a direct import (via a Crunch integration), then you won’t have used Crunch Automation in the previous wave to set it up. The array is then considered a real array, not a derived one. In that case you don’t need to do anything.
In both cases above (real or derived array), the append process automatically takes care of the union of subvariables. That is, it matches the array between dataset A and B based on its parent alias, and then in the append process, adds the new subvariable to the array.
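The union-of-subvariables behavior can be sketched as follows. This is a conceptual illustration in plain Python, not Crunch's internal code; the brand aliases and 0/1 values are invented, and `None` stands in for Crunch's missing values.

```python
# Illustrative sketch (not Crunch's internal code): appending arrays takes
# the union of subvariables, matched by alias; rows from waves where a
# subvariable did not exist are filled with missing values (None here).
def append_arrays(wave_a, wave_b):
    """Each wave is a dict: subvariable alias -> list of row values."""
    aliases = list(wave_a) + [a for a in wave_b if a not in wave_a]
    n_a = len(next(iter(wave_a.values())))  # rows in dataset A
    n_b = len(next(iter(wave_b.values())))  # rows in dataset B
    return {
        alias: wave_a.get(alias, [None] * n_a) + wave_b.get(alias, [None] * n_b)
        for alias in aliases
    }

w1 = {"coke": [1, 0], "pepsi": [0, 1], "fanta": [1, 1]}          # Wave 1
w2 = {"coke": [0], "pepsi": [1], "fanta": [0], "sprite": [1]}    # Wave 2
merged = append_arrays(w1, w2)
print(merged["sprite"])  # -> [None, None, 1]
```

The same logic covers dropped brands: a subvariable present only in dataset A simply ends up missing for the rows contributed by dataset B.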
What about dropping brands and statements?
The process is exactly the same as above. Suppose in Wave 3 you drop the Coke subvariable. If it’s a derived array, you tweak the Crunch Automation script to remove Coke (since the subvariable it refers to is gone). If it’s a real array, you don’t need to do anything.
In the append process, you’ll still end up with 4 subvariables; there will simply be missing data for Coke in Wave 3. Can you delete Coke completely from the array? No; you’d need to define a new array. (Note: Crunch can suppress empty rows/columns for analysts, so this may not be necessary.)
How to create a weight with different definitions for each wave?
The 5-step process above handles this for you.
In Wave 1, you define a weighting variable, raking Age and Gender.
RAKING (age = xxx, Gender = xx)
AS weight_demo ;
Then in Wave 2, you want to create a weight that rakes a third variable, Income, in addition to Age and Gender. You simply modify the Crunch Automation script for Wave 2.
RAKING (age = xxx, Gender = xxx, Income = xxx)
AS weight_demo ;
Then, in the append process, when Crunch combines the variable weight_demo, it will have a different definition for Wave 1 than it does for Wave 2. This can be inspected in the merged dataset.
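For intuition about what raking does, here is a minimal sketch of the underlying algorithm (iterative proportional fitting) in plain Python. Crunch's RAKING command performs this for you server-side; the categories and target proportions below are invented for illustration.

```python
# Illustrative sketch of raking (iterative proportional fitting), invented data.
def rake(rows, targets, iterations=50):
    """rows: list of dicts of category values; targets: var -> {cat: proportion}.
    Returns one weight per row so weighted margins match the target proportions."""
    weights = [1.0] * len(rows)
    for _ in range(iterations):
        for var, props in targets.items():
            # Current weighted total in each category of this variable.
            totals = {cat: 0.0 for cat in props}
            for row, w in zip(rows, weights):
                totals[row[var]] += w
            n = sum(weights)
            # Scale each row so the category's weighted share hits its target.
            for i, row in enumerate(rows):
                cat = row[var]
                weights[i] *= props[cat] * n / totals[cat]
    return weights

rows = [
    {"age": "18-34", "gender": "m"}, {"age": "18-34", "gender": "f"},
    {"age": "35+", "gender": "m"}, {"age": "35+", "gender": "f"},
    {"age": "35+", "gender": "f"},
]
targets = {"age": {"18-34": 0.5, "35+": 0.5},
           "gender": {"m": 0.5, "f": 0.5}}
w = rake(rows, targets)
```

Adding a third variable to the Wave 2 weight corresponds to adding another entry to `targets`; the algorithm is unchanged, which is why the script tweak between waves is so small.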