Note that the jtools package has plotting capabilities for interactions that include a continuous variable, as well as for categorical x categorical interactions.
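A minimal sketch of what this looks like (note: in newer versions of jtools, `interact_plot()` and `cat_plot()` were spun off into the companion `interactions` package, so you may need to load that instead):

```r
library(jtools)  # or library(interactions) in newer versions

# continuous x continuous interaction
fit <- lm(mpg ~ hp * wt, data = mtcars)

# plots predicted mpg across hp at low/mean/high values of the moderator wt
interact_plot(fit, pred = hp, modx = wt)
```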
I do research on brand strategy using a multi-method approach that involves analyzing secondary data and running controlled experiments. I examine brands as social entities that are subject to norms, with the goal of identifying optimal marketing communications strategies. I am very interested in R programming and I'm currently doing a lot of work with the Twitter API.
|T. Bettina Cornwell||Authenticity in horizontal marketing partnerships: A better measure of brand compatibility||Journal of Business Research 100 (2019) 279-298|
|Joshua T. Beck and Joshua J. Clarkson||Feeling Left Out? Political Extremism and Normalizing Consumption||Preparing manuscript for submission to JCR|
|T. Bettina Cornwell||Brand Latitude: A Conceptual Model||Defending dissertation in late May 2019|
|T. Bettina Cornwell||What Gives Brands Latitude to Engage in Advocacy?||Defending dissertation in late May 2019|
|Scott Cowley||Examining Brand Heterogeneity Among Brand Social Media Responses to Community Positivity||Presented at Winter AMA 2019|
So, apparently the antivirus software on my work computer (which I have no control over) is equipped with "real time file system protection." I don't really know what that is, but I know that it often prevents me from installing R packages on my machine, which in turn makes it very difficult to get my analysis done. I don't know how many people have the same problem, but I'm posting it here for my reference.
I think it's fairly well established that adding predictors to a model can only push R2 up (in OLS, R2 never decreases when you add a predictor, even one that is pure noise). Therefore, it can be tricky to compare dueling latent variables with unequal numbers of indicators. It's not clear to me, however, that aggregating across the variables solves this problem. It seems to persist even when a single average value is used to represent all of the latent variable's indicators.
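A quick simulation makes the point (all names here are arbitrary): even a predictor that is unrelated to the outcome by construction nudges R2 upward.

```r
set.seed(42)
n     <- 100
y     <- rnorm(n)
x1    <- rnorm(n)
noise <- rnorm(n)  # generated independently of y

m1 <- lm(y ~ x1)
m2 <- lm(y ~ x1 + noise)  # same model plus a junk predictor

summary(m1)$r.squared
summary(m2)$r.squared  # never lower than m1's R-squared
```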
The challenge with Qualtrics is that its exported files tend to have two header rows: variable names and variable descriptions. This is not a handy format for later data manipulation. Here is the code that I use to import the data without that row of descriptions.
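A minimal sketch of one way to do this (the file name is a placeholder, and this is an illustration rather than my exact script): read the variable names from the first row, then skip both header rows when reading the responses.

```r
import_qualtrics <- function(file) {
  # grab the variable names from the first header row only
  var_names <- names(read.csv(file, nrows = 1, check.names = FALSE))
  # read the actual responses, skipping both header rows
  read.csv(file, skip = 2, header = FALSE,
           col.names = var_names, check.names = FALSE)
}

survey <- import_qualtrics("qualtrics_export.csv")
```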
Centering and scaling predictors
This seems to be no longer necessary, now that jtools has a function called `summ()` that gives you richer output than the old-school `summary()` call. These outputs include centered or scaled predictors, confidence intervals, and robust standard errors.
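For example (argument names are from my reading of the jtools documentation and may vary by version; the robust option relies on the sandwich package being installed):

```r
library(jtools)

fit <- lm(mpg ~ hp + wt, data = mtcars)

# mean-center the predictors, report confidence intervals,
# and use heteroskedasticity-robust (HC3) standard errors
summ(fit, center = TRUE, confint = TRUE, robust = "HC3")
```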
A really, really cool feature of Qualtrics that is not well documented is its web service. If you're not familiar with it, a web service basically allows you to bring dynamic content, such as names or prices, into your survey from an external website. An example of this is Qualtrics' own random number generator: http://reporting.qualtrics.com/projects/randomNumGen.php. It creates random values that you can then insert into embedded data fields for use in your survey instrument.
Check out my R Club post: Bringing in Qualtrics (and other data)
I've decided to automate one of the most tedious processes that we do here.
This R code combines data from our Qualtrics studies with demographics. Basically, it removes duplicate demographic records, eliminates students without a student ID (i.e., "test" responses), merges the two datasets, and then outputs the data as a CSV file so that you can work with it in Excel or SPSS. The required inputs are the names of the two files (demos and study data) and the directory they are stored in. It outputs the combined file to the same directory.
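The workflow can be sketched like this (column and output names such as `student_id` and `"combined.csv"` are placeholders of mine, not necessarily what the real script uses):

```r
combine_study_data <- function(dir, demo_file, study_file) {
  demos <- read.csv(file.path(dir, demo_file), stringsAsFactors = FALSE)
  study <- read.csv(file.path(dir, study_file), stringsAsFactors = FALSE)

  # keep one demographic record per student
  demos <- demos[!duplicated(demos$student_id), ]
  # drop test entries that lack a real student ID
  demos <- demos[demos$student_id != "" & demos$student_id != "test", ]

  # merge study responses with demographics and write out a CSV
  combined <- merge(study, demos, by = "student_id")
  write.csv(combined, file.path(dir, "combined.csv"), row.names = FALSE)
  invisible(combined)
}
```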
Yates's correction or no Yates's correction? Pearson's chi-squared versus Fisher's exact test? I revisit the chi-squared test from a previous post and attempt to discover which method is best for testing the independence of two groups.
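For reference, all three options are one-liners in base R (the counts below are made up for illustration):

```r
# a small 2x2 contingency table
tab <- matrix(c(8, 2, 1, 5), nrow = 2)

chisq.test(tab)                   # Pearson's chi-squared; for 2x2 tables R
                                  # applies Yates's continuity correction by default
chisq.test(tab, correct = FALSE)  # Pearson's chi-squared without the correction
fisher.test(tab)                  # Fisher's exact test
```

With cells this small, `chisq.test()` warns that the approximation may be incorrect, which is exactly the situation where Fisher's exact test is usually recommended.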
I am presenting to R Club today on predictive modeling, a subject that I've been really interested in. It was quite a bit of work to prepare for it, but I feel like I got a lot out of it.
Yesterday when I presented my Facebook Ghost Towns idea to the other PhD students (and Bob), someone brought up an important point. I think it may have been Jeffrey Xie. I know the consensus is that the ghost town effect is caused by click fraud, but I tend to prefer a less sinister explanation. [Read more…]
This is hopefully what I'll be working on for my first-year PhD project. I'm presenting it to my peers tomorrow at lab meeting. [Read more…]
In R Club, we worked on making maps. Here is mine: [Read more…]
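For reference, a minimal sketch of one way to draw a basic map in R with ggplot2 (the dataset choice here is mine for illustration, and `map_data()` requires the maps package to be installed):

```r
library(ggplot2)

# US state outlines, pulled in via the maps package
us <- map_data("state")

ggplot(us, aes(x = long, y = lat, group = group)) +
  geom_polygon(fill = "grey90", colour = "grey40") +
  coord_quickmap() +
  theme_void()
```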
This is for a class I taught…
I just added some good information about data wrangling to the R Club blog.