Assessing consumers’ survey engagement through ordering and time effects in discrete choice experiments: a hybrid model approach
International Food Marketing Research Symposium 2022
Author(s): Cubero Dudinskaya, Emilia; Naspetti, Simona; Zanoli, Raffaele
Classification: Contribution in Conference Proceedings (Proceeding)
Abstract: Discrete choice experiments (DCEs) conducted online are widely used to study consumers’ food preferences, especially when the behaviour of interest involves discrete or qualitative choices (Louviere et al., 2008). DCEs are based on Lancastrian consumer theory (Lancaster, 1966) and the random utility model (RUM) framework (McFadden, 1974). DCEs simulate a trading market in which consumers are presented with a series of choice sets, each composed of products with different combinations of attributes. Choice set configurations are determined a priori according to unbiased and efficient principles of statistical estimation. Consumers are asked to state their preferred alternative in each choice set. Based on this information, researchers can estimate the effects of the attributes on consumers’ preferences and choices. Although previous research established important advantages of using DCEs to collect data on consumers’ food preferences (Byun et al., 2018; Schlereth et al., 2012; Telser & Zweifel, 2009), significant disadvantages have also been highlighted (Lindhjem & Navrud, 2011). DCEs often rely on surveys conducted online, in which pre-recruited online panels composed of semi-professional and recurrent respondents frequently exhibit strategic time-saving behaviour and rush through the questionnaire without carefully considering their choices (Börger, 2016; Campbell et al., 2018; Schwappach & Strasmann, 2006). Such behaviour compromises the quality and validity of the collected data (Malhotra et al., 2008). Consumers’ engagement with the survey is especially relevant given the increasing reliance on data collected through online surveys, where few controls are available to guarantee consumers’ attention and engagement (Hess & Stathopoulos, 2013). Moreover, consumers’ incentives for participating in online studies are frequently linked directly to the time taken to answer the surveys.
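The RUM framework referenced above can be illustrated, in its standard textbook form rather than the authors’ exact specification, by decomposing the utility that respondent $n$ derives from alternative $j$ in choice task $t$ into a deterministic part and a random error:

```latex
U_{njt} = V_{njt} + \varepsilon_{njt} = \beta' x_{njt} + \varepsilon_{njt},
\qquad
P_{njt} = \Pr\!\left(U_{njt} \ge U_{nit} \ \ \forall\, i \in C_{t}\right)
```

With i.i.d. type-I extreme-value errors, the choice probability takes the familiar logit form $P_{njt} = e^{V_{njt}} / \sum_{i \in C_t} e^{V_{nit}}$, and the estimated $\beta$ captures the attribute effects the abstract refers to.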
Response time may also be highly correlated with decision heuristics (Campbell et al., 2018). Earlier research suggests using response time as a proxy for respondents’ cognitive effort and engagement with the survey (Campbell et al., 2016; Rose & Black, 2006). Nevertheless, measuring survey engagement is challenging: relying on proxies as explanatory variables for scale heterogeneity can lead to endogeneity bias. As a result, previous studies suggest a hybrid discrete choice modelling approach, which allows consumers’ survey engagement to be incorporated as a latent variable in the DCE (Hess & Stathopoulos, 2013). Hess and Stathopoulos (2013) operationalized survey engagement as a combination of survey time measures and additional questions to respondents about their engagement. Nevertheless, several limitations arise. First, asking respondents additional questions implies a longer survey and higher response fatigue. Second, as the authors themselves note, considering response time for the entire survey, or as a single measure for the complete choice experiment, ignores the time taken on each individual choice task. When examining the gain-loss asymmetry in stated choice experiments, Börjesson and Fosgerau (2015) analyzed the time per choice task; however, no previous research has analyzed the relationship between time per choice task and consumers’ survey engagement. Moreover, the time spent on each choice task must be considered alongside the order in which each task is displayed, as previous research suggests that consumers exhibit relatively unstable preferences over the sequence of choice tasks (Nguyen et al., 2021). Given these limitations, the present research question is: how are the time consumers spend on each choice task, and the order in which choice tasks are displayed, linked to consumers’ survey engagement?
To answer the research question, the authors tackle the limitations of previous studies in the following way. First, to address the endogeneity bias, the research followed Hess and Stathopoulos (2013) in implementing a hybrid model structure (Ben-Akiva et al., 2002), treating the consumers’ level of engagement as a latent variable. Second, the latent variable is specified considering the response time for each choice task for each respondent. Third, the authors also incorporate into the latent variable the order in which each choice task was displayed to each consumer, to account for consumers’ unstable preferences (Nguyen et al., 2021). As a result, the current study contributes to consumer research by proposing a hybrid model in which consumers’ survey engagement is operationalized as a latent variable that explains scale heterogeneity without causing higher response fatigue, while also accounting for choice display ordering effects. We applied the model to a dataset of stated choices of tomato purée. The stated choice experiment presented respondents with twelve choice sets containing two tomato purée options and a no-choice alternative. Each alternative was described by four attributes: tomato seed type, natural vitamin C content, product origin and price. The parameters for seed type, vitamin C content and origin were specified to vary randomly across respondents, following a normal distribution. The attributes varied according to a D-efficient design (D-error = 0.57) estimated using the Ngene software with priors from a pilot survey. Three sociodemographic interactions (gender and frequency of purchase of organic products) were also incorporated as shifts in the means of the estimated parameters. The data was collected in September 2020 in eleven countries: Denmark, France, Germany, Hungary, Italy, Latvia, Netherlands, Slovenia, Spain, Switzerland and the United Kingdom. A total of 4,208 usable responses were collected, yielding 50,496 observations.
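The core mechanism of such a hybrid structure can be sketched as follows. This is a minimal illustration, not the authors’ estimated model: it assumes the latent engagement variable enters as a scale on deterministic utility (higher engagement, lower error variance, choices driven more strongly by attributes), with hypothetical attribute values and parameters.

```python
import numpy as np

def choice_probabilities(X, beta, engagement, lam):
    """Scaled multinomial logit probabilities for one choice set.

    Illustrative hybrid-model mechanism: the latent engagement
    variable scales the deterministic utility via exp(lam * engagement),
    which keeps the scale strictly positive. Higher engagement makes
    choices sharper (less noise); lower engagement flattens them.

    X          : (n_alternatives, n_attributes) attribute matrix
    beta       : (n_attributes,) taste parameters
    engagement : scalar latent engagement for this respondent
    lam        : loading of engagement on the scale parameter
    """
    scale = np.exp(lam * engagement)   # scale > 0 by construction
    v = scale * (X @ beta)             # scaled deterministic utility
    v -= v.max()                       # numerical stability
    expv = np.exp(v)
    return expv / expv.sum()

# Hypothetical choice set: two tomato-puree alternatives plus a
# no-choice (a row of zeros); three attributes (seed type, vitamin C,
# price, with the negative price effect folded into the X values).
X = np.array([[1.0, 1.0, -2.0],
              [0.0, 1.0, -1.0],
              [0.0, 0.0,  0.0]])
beta = np.array([0.5, 0.8, 0.3])

p_low = choice_probabilities(X, beta, engagement=-1.0, lam=1.0)
p_high = choice_probabilities(X, beta, engagement=2.0, lam=1.0)
```

Comparing `p_low` and `p_high` shows the scale-heterogeneity story in miniature: the less engaged respondent’s probabilities are closer to uniform (more apparent randomness), while the engaged respondent’s probabilities concentrate on the highest-utility alternative.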
Results from the hybrid choice model show that the parameters were statistically significant for all attributes. The latent variable was positive and significant, although a lower value was observed for respondents under 34 years old. Results from the measurement model show that increases in the latent variable are associated with a higher probability of longer times spent familiarising with the attributes, the cheap talk, the label definitions, and each choice task. Respondents with a more positive value of the latent engagement variable are more likely to take longer to complete the survey, possibly confirming the hypothesis of their higher concentration on the task (Börger, 2016). Moreover, by separating the total time taken to complete the survey into the times taken to complete different parts of the choice task, it is possible to observe that the time to complete each choice task presents the highest estimate. This result highlights the key role of the time taken per choice task, rather than the total time of the survey or the DCE. Future research should include the time per choice task in the analysis and avoid using proxies that do not carry the same weight. Regarding the ordering effects, the results support previous research in which consumers exhibit relatively unstable preferences over the sequence of choice tasks (Nguyen et al., 2021). Specifically, a lower level of survey engagement was observed for the first two choice sets displayed to any respondent. This result is in line with previous literature, in which very high error variances were found for the first two choice sets (Carlsson et al., 2012). According to the authors, such results suggest learning effects; it might therefore be wise to include one or two practice choice tasks at the beginning of a choice experiment so that consumers can familiarize themselves with the task. Several limitations arise in the current study.
First, as response time is measured on a click-by-click basis, there is no way to ensure that the consumer was not distracted or multitasking while completing the choice experiment. Second, all response times are included as a continuous variable, which does not allow identifying consumers who require significantly more time because they face other distractions (Campbell et al., 2018). The authors believe the study should be extended by including response times as a categorical variable, to identify whether overly prolonged response times might signal less engaged consumers. Third, although this study covers different national settings, it is important to highlight that it examined only one empirical context: the food choice of a specific product. The authors believe that the response-time and ordering effects introduced in this study should be of wider interest. However, as there is no certainty that consumers’ decision processes remain stable across diverse products, future research should inspect a wider variety of settings and choice contexts. Finally, further research is necessary to analyze possible divergent outcomes and to address the role of extreme response times and their effect on the parameter estimates.
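The categorical treatment of response times suggested above could be operationalized along these lines. This is a hypothetical sketch, not part of the study: the quantile cut-offs and simulated times are assumptions chosen only to show how per-task times might be binned into speeding, typical, and possibly-distracted categories.

```python
import numpy as np

def categorise_response_times(times_sec, fast_q=0.05, slow_q=0.95):
    """Bin per-task response times into 'fast' / 'typical' / 'slow'.

    Hypothetical illustration of the proposed extension: treating
    response time as categorical so that implausibly fast clicks
    (possible speeding) and very long pauses (possible distraction
    or multitasking) can be flagged separately rather than entering
    the model as a single continuous variable. The quantile cut-offs
    are assumptions, not values from the study.
    """
    times = np.asarray(times_sec, dtype=float)
    lo, hi = np.quantile(times, [fast_q, slow_q])
    return np.where(times < lo, "fast",
                    np.where(times > hi, "slow", "typical"))

# Simulated per-choice-task times in seconds (log-normal is a common
# rough shape for response-time distributions; parameters are arbitrary).
rng = np.random.default_rng(0)
times = rng.lognormal(mean=2.5, sigma=0.6, size=1000)
cats = categorise_response_times(times)
```

The resulting categories could then enter the measurement model as dummy indicators, letting the "fast" and "slow" tails load on the latent engagement variable with separate coefficients.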
Publication record: https://iris.univpm.it/handle/11566/344553