```{mermaid}
%%{init:{'theme':'forest', 'flowchart':{'nodeSpacing': 20, 'rankSpacing': 30}}}%%
graph LR
    Sunlight --> Growth
    Water --> Growth
    Sunlight --- Water
    linkStyle 0,1,2 stroke-width:1px;
```
Running statistical analyses is a relatively methodical and well-defined process, even if there is often a lot of trial and error involved. Sometimes it may feel overwhelming and complicated, which it definitely can be, but it doesn’t have to be. By taking a structured approach to running statistical analyses, you can make it easier on yourself and feel more in control.
In R, statistical methods are often created and developed by researchers with little to no training in software development, and each often implements their methods differently. This has some advantages, like having cutting-edge statistical methods available to us, but it has a major disadvantage: we often have to learn a completely different way of running a statistical analysis, even if it is fairly similar to ones we’ve used before. So having a framework for running statistical analyses, regardless of who created them, can provide that needed structure and vastly simplify the analysis. This session will cover a general framework for running statistical analyses, regardless of the exact statistical method.
The overall objective for this session is to:
More specific objectives are to:
Specific “anti”-objectives:
We will be making a lot of functions throughout this session and the next. This is just a fair warning!
Time: ~8 minutes.
In science and especially statistics, we talk a lot about “models”. But what does “model” actually mean? What different types of definitions can you think of? Is there a different understanding of “model” in statistics compared to other areas?
Going back to our own lipidomics
dataset, we need to do the first step: Creating the question. While we don’t have much data, there are a surprising number of questions we could ask. But we will keep it very simple, very basic, and very exploratory.
Next, because we are working within a “reproducible analysis” framework, specifically with the use of targets, let’s convert these questions into outputs to include as pipeline targets, along with a basic idea for the final functions that will make up these targets and their inputs and outputs. These targets will probably be quite different by the end, but it’s a good start to think about what the pipeline should look like.
Potential function names might be:
calculate_estimates()
calculate_variation()
plot_estimates()
plot_variation()
```{mermaid}
%%{init:{'theme':'forest', 'flowchart':{'nodeSpacing': 20, 'rankSpacing': 30}}}%%
graph TB
    lipidomics -- "calculate_estimates()" --> model_est[Model estimates]
    model_est -- "plot_estimates()" --> plot_est[Plot of estimates]
    lipidomics -- "calculate_variation()" --> model_var[Model variation]
    model_var -- "plot_variation()" --> plot_var[Plot of variation]
    plot_est & plot_var -- "tar_read()" --> rmd[R Markdown]
    linkStyle 0,1,2,3,4,5 stroke-width:1px;
```
Now that we’ve talked about the workflow around making models and have already written out some research questions, let’s make a basic, graphical theoretical model:
```{mermaid}
%%{init:{'theme':'forest', 'flowchart':{'nodeSpacing': 20, 'rankSpacing': 30}, 'themeVariables': {'edgeLabelBackground': 'transparent'}}}%%
graph TB
    Metabolite --> T1D
    Age & Gender --> T1D & Metabolite
    linkStyle 0,1,2,3,4 stroke-width:1px;
```
Or mathematically:
\[T1D = metabolite + age + gender\]
So, T1D status (or class
in the lipidomics
dataset) is our outcome and the individual metabolite, age, and gender are our predictors. Technically, age and gender would be “confounders” or “covariates”, since we include them only because we think they influence the relationship between the metabolite and T1D.
If we convert the formula into a form with the variables we have in the dataset, as well as selecting only one metabolite for now (the cholesterol metabolite, to which we add a “metabolite_” prefix to differentiate it from other potential variables), it would be:
\[class = metabolite\_cholesterol + age + gender\]
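In R code, this theoretical model translates directly into formula syntax, which we will use later when telling recipes about the model:

```r
class ~ metabolite_cholesterol + age + gender
```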
Now that we have a theoretical model, we need to choose our model type. Since T1D is binary (either you have it or you don’t), the most likely choice is logistic regression, which requires a binary outcome variable. So we have the theoretical model and the type of model to use. How do we express this as code in R? There are many ways of doing the same thing in R, but some are a bit easier than others. One such approach, which is quite generic and fits with the ideals of the tidyverse, is a similar universe of packages called tidymodels.
Why do we teach tidymodels? Because these packages are built by software developers employed by RStudio (who also employ the people who build the tidyverse), and they have a strong reputation for writing good documentation. Plus, the tidymodels set of packages makes creating and using models quite generic, so by teaching you these tools, you can relatively easily change the model type, how you process the data, or other specifications, without having to learn a whole new package or set of tools.
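To illustrate that genericity, here is a minimal sketch (not code we will use in this analysis) showing how the same parsnip pattern defines quite different models; only the model type function and the engine change:

```r
library(tidymodels)

# The same "type + engine" pattern defines very different models:
logistic_reg() %>% set_engine("glm") # logistic regression via glm()
linear_reg() %>% set_engine("lm") # linear regression via lm()
rand_forest() %>% set_engine("ranger") # random forest via the ranger package
```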
The reason tidymodels can do that is because they designed it in a way that makes a clear separation in the components of the model building workflow that was described above, through the use of specific packages for each component.
| Package | Description |
|---|---|
| parsnip | Model definition, such as type (e.g. `linear_reg()`) and “engine” (e.g. `glm()`). |
| recipes | Model-specific data transformations, such as removing missing values or standardizing the data. |
| workflows | Combining the model definition, data, and transformations to calculate the estimates and uncertainty. |
We’ll start with the parsnip package. Functions in this package are used to set the details of the model you want to use: specifically, functions to indicate the model “type” (e.g. linear regression) and the set_engine() function to determine the “engine” to run the type (which R-based algorithm to use, like glm() compared to lm()). Check out the Examples page for code you might use depending on the model you want. The most commonly used model types would be linear_reg(), logistic_reg(), and multinom_reg().
We want to use logistic regression. So, open the doc/lesson.Rmd
file and in the setup
code chunk add library(tidymodels)
, so it looks like:
```{r setup}
targets::tar_config_set(store = here::here("_targets"))
library(tidyverse)
library(tidymodels)
source(here::here("R/functions.R"))
lipidomics <- tar_read(lipidomics)
```
Then, delete everything below this code chunk.
Since we will be using tidymodels, we need to install it, as well as explicitly add the parsnip, recipes, and workflows packages. Like tidyverse, we need to add tidymodels differently because it is a “meta-package”. We might need to force install it with install.packages("tidymodels") so renv recognizes it.
use_package("tidymodels", "depends")
# install.packages("tidymodels")
use_package("parsnip")
use_package("recipes")
use_package("workflows")
Before continuing, let’s commit the changes to the Git history. Next, in the doc/lesson.Rmd
file, at the bottom of the document create a new header and code chunk:
## Building the model
```{r}
```
In the new code chunk, we will set up the model specs:
log_reg_specs <- logistic_reg() %>%
set_engine("glm")
log_reg_specs
Logistic Regression Model Specification (classification)
Computational engine: glm
Running this on its own doesn’t show much, as you can see. But we’ve now set the model we want to use.
Time: ~10 minutes.
Using parsnip’s “Examples” vignette, as well as the code we wrote for the logistic regression above as a template, write parsnip code that would define a simple linear regression model (with an engine of "lm"). Begin by making a new Markdown header and code chunk at the bottom of the doc/lesson.Rmd file, as listed below:
## Exercises
### Linear regression model definition
```{r}
```
After writing the code, run styler (Ctrl-Shift-P
, then type “style file”). We will eventually delete this exercise text in the R Markdown file, but for now, commit the changes to the Git history.
linear_reg_specs <- linear_reg() %>%
set_engine("lm")
Setting the model type was pretty easy, right? The more difficult part comes next with the data transformations. recipes functions are almost entirely used to apply transformations that a model might specifically need, like mean-centering, removing missing values, and other aspects of data processing.
Let’s consider our lipidomics
dataset. In order for us to start our statistical analysis, we need the data to be structured in a certain way to be able to smoothly use it as input in our model. We have at least three easy observations on necessary transformations of the data, two of which can be fixed with a single tidyr function, while the third one can be fixed with recipes. Can you spot them?
lipidomics
# A tibble: 504 × 6
code gender age class metabolite value
<chr> <chr> <dbl> <chr> <chr> <dbl>
1 ERI109 M 25 CT TMS (interntal standard) 208.
2 ERI109 M 25 CT Cholesterol 19.8
3 ERI109 M 25 CT Lipid CH3- 1 44.1
4 ERI109 M 25 CT Lipid CH3- 2 147.
5 ERI109 M 25 CT Cholesterol 27.2
6 ERI109 M 25 CT Lipid -CH2- 587.
7 ERI109 M 25 CT FA -CH2CH2COO- 31.6
8 ERI109 M 25 CT PUFA 29.0
9 ERI109 M 25 CT Phosphatidylethanolamine 6.78
10 ERI109 M 25 CT Phosphatidycholine 41.7
# … with 494 more rows
The first observation isn’t always an issue and depends heavily on the model type you use. Since we are using logistic regression, the model assumes that each row is an individual person. But our data is in the long format, so each person has multiple rows. The second observation is that there seems to be a data input error, since there are three Cholesterol
values, while all other metabolites only have one:
# A tibble: 36 × 3
code metabolite n
<chr> <chr> <int>
1 ERI109 Cholesterol 3
2 ERI111 Cholesterol 3
3 ERI140 Cholesterol 3
4 ERI142 Cholesterol 3
5 ERI143 Cholesterol 3
6 ERI144 Cholesterol 3
7 ERI145 Cholesterol 3
8 ERI146 Cholesterol 3
9 ERI147 Cholesterol 3
10 ERI149 Cholesterol 3
# … with 26 more rows
We can fix both the long format and multiple cholesterol issues by using tidyr::pivot_wider()
. Before we do, the last issue is that each metabolite has quite large differences in the values and ranges of data. Again, whether this is an issue depends on what we want to do, but in our research question we want to know how each metabolite influences T1D. In order to best interpret the results and compare across metabolites, we should ideally have all the metabolites with a similar range and distribution of values.
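To see these scale differences for yourself, a quick check in the Console might look like this (a sketch, not part of the final pipeline):

```r
# Compare the range of values within each metabolite
lipidomics %>%
  group_by(metabolite) %>%
  summarise(min = min(value), max = max(value))
```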
Let’s fix the first two issues. While we probably only need to use pivot_wider(), we should first tidy up the metabolite names so they make better column names. We do that by combining mutate() with snakecase::to_snake_case(). In the doc/lesson.Rmd file, we rename the metabolite names before using pivot_wider(). Since we want an easy way of identifying columns that are metabolites, we will add a "metabolite_" prefix using the argument names_prefix. To properly fix the multiple cholesterol issue, we would need to look more into the data documentation or contact the authors. But for this course, we will merge the values by calculating a mean before pivoting. We do this by setting the values_fn argument to mean.
lipidomics_wide <- lipidomics %>%
mutate(metabolite = snakecase::to_snake_case(metabolite)) %>%
pivot_wider(
names_from = metabolite,
values_from = value,
values_fn = mean,
names_prefix = "metabolite_"
)
lipidomics_wide
# A tibble: 36 × 16
code gender age class metab…¹ metab…² metab…³ metab…⁴ metab…⁵
<chr> <chr> <dbl> <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
1 ERI109 M 25 CT 208. 18.6 44.1 147. 587.
2 ERI111 M 39 CT 219. 20.8 28.1 153. 585.
3 ERI163 W 58 CT 57.1 15.5 75.1 144. 558.
4 ERI375 M 24 CT 19.2 10.2 22.0 220. 606.
5 ERI376 M 26 CT 35.4 13.5 29.5 282. 554.
6 ERI391 M 31 CT 30.4 9.53 38.0 220. 597.
7 ERI392 M 24 CT 21.7 9.87 34.8 215. 607.
8 ERI79 W 26 CT 185. 17.6 109. 153. 546.
9 ERI81 M 52 CT 207. 17.0 49.6 150. 593.
10 ERI83 M 25 CT 322. 19.7 29.9 153. 606.
# … with 26 more rows, 7 more variables:
# metabolite_fa_ch_2_ch_2_coo <dbl>, metabolite_pufa <dbl>,
# metabolite_phosphatidylethanolamine <dbl>,
# metabolite_phosphatidycholine <dbl>,
# metabolite_phospholipids <dbl>, metabolite_mufa_pufa <dbl>,
# metabolite_cd_cl_3_solvent <dbl>, and abbreviated variable
# names ¹metabolite_tms_interntal_standard, …
Since we’re using a function-oriented workflow and since we will be using this code again later on, let’s convert both the “metabolite to snakecase” and “pivot to wider” code into their own functions, before moving them over into the R/functions.R
file.
metabolite_values_to_snakecase <- function(data) {
data %>%
dplyr::mutate(metabolite = snakecase::to_snake_case(metabolite))
}
lipidomics %>%
metabolite_values_to_snakecase()
# A tibble: 504 × 6
code gender age class metabolite value
<chr> <chr> <dbl> <chr> <chr> <dbl>
1 ERI109 M 25 CT tms_interntal_standard 208.
2 ERI109 M 25 CT cholesterol 19.8
3 ERI109 M 25 CT lipid_ch_3_1 44.1
4 ERI109 M 25 CT lipid_ch_3_2 147.
5 ERI109 M 25 CT cholesterol 27.2
6 ERI109 M 25 CT lipid_ch_2 587.
7 ERI109 M 25 CT fa_ch_2_ch_2_coo 31.6
8 ERI109 M 25 CT pufa 29.0
9 ERI109 M 25 CT phosphatidylethanolamine 6.78
10 ERI109 M 25 CT phosphatidycholine 41.7
# … with 494 more rows
This on its own should work. However, the column we want to change might not always be called metabolite
, or we might want to change it later. So, to make this function a bit more generic, we can use something called “curly-curly” (it looks like {{}}
when used) and “non-standard evaluation” (NSE).
We can use curly-curly (combined with across()
) to apply snakecase::to_snake_case()
to columns of our choice.
column_values_to_snakecase <- function(data, cols) {
data %>%
dplyr::mutate(dplyr::across({{ cols }}, snakecase::to_snake_case))
}
lipidomics %>%
column_values_to_snakecase(metabolite)
# A tibble: 504 × 6
code gender age class metabolite value
<chr> <chr> <dbl> <chr> <chr> <dbl>
1 ERI109 M 25 CT tms_interntal_standard 208.
2 ERI109 M 25 CT cholesterol 19.8
3 ERI109 M 25 CT lipid_ch_3_1 44.1
4 ERI109 M 25 CT lipid_ch_3_2 147.
5 ERI109 M 25 CT cholesterol 27.2
6 ERI109 M 25 CT lipid_ch_2 587.
7 ERI109 M 25 CT fa_ch_2_ch_2_coo 31.6
8 ERI109 M 25 CT pufa 29.0
9 ERI109 M 25 CT phosphatidylethanolamine 6.78
10 ERI109 M 25 CT phosphatidycholine 41.7
# … with 494 more rows
Move this new function over into the R/functions.R
file, add Roxygen comments (have the cursor inside the function, type Ctrl-Shift-P
, then type “roxygen”), run styler (Ctrl-Shift-P
, then type “style file”), source()
the modified R/functions.R
file, and add the new function above the pivot_wider()
code in the doc/lesson.Rmd
file.
#' Convert column value strings into snakecase.
#'
#' @param data Data with string columns.
#' @param cols The column to convert into snakecase.
#'
#' @return A data frame.
#'
column_values_to_snakecase <- function(data, cols) {
data %>%
dplyr::mutate(dplyr::across({{ cols }}, snakecase::to_snake_case))
}
Time: ~10 minutes.
Just like with the mutate(), take the pivot_wider() code and convert it into a new function:

1. Name the new function metabolites_to_wider.
2. Include two arguments in the function(): data and values_fn. Set the default for values_fn to be mean. We add this argument in case we want to merge the duplicate cholesterol variables with something other than the mean.
3. Use data %>% at the beginning, like we did with the column_values_to_snakecase(). Inside the pivot_wider() code, replace values_fn = mean with values_fn = values_fn.
4. Add tidyr:: before the pivot_wider() function.
5. Add Roxygen comments to the function (Ctrl-Shift-P, then type “roxygen”).
6. Move the function into the R/functions.R file.
7. Update the code in the doc/lesson.Rmd file to make use of the new functions.

#' Convert the metabolite long format into a wider one.
#'
#' @param data The lipidomics dataset.
#' @param values_fn A function to summarize the multiple cholesterol values.
#'
#' @return A wide data frame.
#'
metabolites_to_wider <- function(data, values_fn = mean) {
data %>%
tidyr::pivot_wider(
names_from = metabolite,
values_from = value,
values_fn = values_fn,
names_prefix = "metabolite_"
)
}
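As a quick check, the two new functions can now be chained together; assuming both have been sourced, this recreates the lipidomics_wide data frame from before:

```r
lipidomics_wide <- lipidomics %>%
  column_values_to_snakecase(metabolite) %>%
  metabolites_to_wider()
```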
We’ve used dplyr and tidyr to start fixing some of the issues with the data. But we still have the third issue: How to make the results between metabolites comparable. That’s where we use recipes.
The first function is recipe()
and it takes two forms: with or without a formula. Remember the model formula we mentioned previously? Well, here is where we can use it to tell recipes about the model formula we intend to use so it knows on what variables to apply the chosen transformations.
For a few reasons that will become clearer later, we won’t ultimately use the formula form of recipe(), but we will show how it works. The formula version looks like:
recipe(class ~ metabolite_cholesterol + age + gender, data = lipidomics_wide)
Recipe
Inputs:
role #variables
outcome 1
predictor 3
The alternative approach is to set “roles” using update_role()
. Instead of using a formula and letting recipe()
infer the outcome and predictors, we can explicitly select which variables are which. This has some nice features that we will use later on.
recipe(lipidomics_wide) %>%
update_role(metabolite_lipid_ch_3_1, age, gender, new_role = "predictor") %>%
update_role(class, new_role = "outcome")
Recipe
Inputs:
role #variables
outcome 1
predictor 3
12 variables with undeclared roles
The next “step” is to select a transformation function.
Time: ~15 minutes.
Look at the list of step_*
functions below and use the ?
or F1 (while having the cursor on the function name) to access the help documentation. Consider the metabolite data in the lipidomics
dataset. Which of these transformations might you use?
recipes::step_log()
recipes::step_scale()
recipes::step_normalize()
recipes::step_center()
recipes::step_sqrt()
Choose one step_ transformation you might use for the numeric metabolite data.

There are so many useful transformation functions available. For instance, if you often have to impute data, there are functions for that. You can check them out in the Console by typing recipes::step_impute_ and then hitting the Tab key to see a list of them. Or, if you have some missing values, there’s also recipes::step_naomit().
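As an illustration only (a hypothetical sketch, not the transformation we will use below), adding one of these steps to a recipe looks like:

```r
# For example: log-transform all the metabolite columns
recipe(lipidomics_wide) %>%
  step_log(starts_with("metabolite_"))
```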
The step_
function we use in the text of this website in later sections may be different from what you decide on in your group and in the class as a whole. There isn’t strictly a “right” answer here, since it would ultimately require domain expertise in both lipidomic quantification and statistical analysis of -omic data. But we ultimately need to show and use something in the text.
You will use whichever transformation function your group decided on in the exercise above, but for the text of this website, we will use step_normalize()
. This function is useful because it centers each variable at zero and scales it so that one unit corresponds to one standard deviation of the original distribution. This means we can more easily compare values between variables. If we add this to the end of the recipe:
recipe(lipidomics_wide) %>%
update_role(metabolite_lipid_ch_3_1, age, gender, new_role = "predictor") %>%
update_role(class, new_role = "outcome") %>%
step_normalize(starts_with("metabolite_"))
Recipe
Inputs:
role #variables
outcome 1
predictor 3
12 variables with undeclared roles
Operations:
Centering and scaling for starts_with("metabolite_")
The next thing to do is convert this into a function, using the same workflow we’ve been using (which means this needs to be in the R/functions.R script). We’ll also use curly-curly again, since we might use a different metabolite later. Note that when adding all the packagename::
to each function, the starts_with()
function comes from the tidyselect package.
#' A transformation recipe to pre-process the data.
#'
#' @param data The lipidomics dataset.
#' @param metabolite_variable The column of the metabolite variable.
#'
#' @return A recipe specification.
#'
create_recipe_spec <- function(data, metabolite_variable) {
recipes::recipe(data) %>%
recipes::update_role({{ metabolite_variable }}, age, gender, new_role = "predictor") %>%
recipes::update_role(class, new_role = "outcome") %>%
recipes::step_normalize(tidyselect::starts_with("metabolite_"))
}
And test it out:
recipe_specs <- lipidomics_wide %>%
create_recipe_spec(metabolite_cholesterol)
recipe_specs
Recipe
Inputs:
role #variables
outcome 1
predictor 3
12 variables with undeclared roles
Operations:
Centering and scaling for tidyselect::starts_with("metabolite_")
Run styler (Ctrl-Shift-P
, then type “style file”), then commit the changes made to the Git history.
We’ve now defined the model we want to use and created a transformation recipes specification. Now we can start putting them together and finally fit them to the data. This is done with the workflows package.
Why use this package, rather than simply run the statistical analysis and process the data as normal? When running multiple models (like we will do in the next section) that may require different data structures, the data transformation steps have to happen right before the data is fit to the model and need to be done on exactly the data used by the model. So if we have one data frame that we run multiple models on, but the transformation happens to the whole data frame, we could end up with issues due to how the transformations were applied. The workflows package keeps track of those things for us, so we can focus on the higher level thinking rather than on the small details of running the models.
The workflows package has a few main functions for combining the recipe with the model specs, as well as several for updating an existing workflow (which might be useful if you need to run many models of slightly different types). All model workflows need to start with workflow()
, followed by two main functions: add_model()
and add_recipe()
. Can you guess what they do?
workflow() %>%
add_model(log_reg_specs) %>%
add_recipe(recipe_specs)
══ Workflow ════════════════════════════════════════════════════════
Preprocessor: Recipe
Model: logistic_reg()
── Preprocessor ────────────────────────────────────────────────────
1 Recipe Step
• step_normalize()
── Model ───────────────────────────────────────────────────────────
Logistic Regression Model Specification (classification)
Computational engine: glm
While this code is already pretty concise, let’s convert it into a function to simplify it further. We’ll use the same function-oriented workflow that we’ve used before, where the function should ultimately be inside the R/functions.R
file.
#' Create a workflow object of the model and transformations.
#'
#' @param model_specs The model specs
#' @param recipe_specs The recipe specs
#'
#' @return A workflow object
#'
create_model_workflow <- function(model_specs, recipe_specs) {
workflows::workflow() %>%
workflows::add_model(model_specs) %>%
workflows::add_recipe(recipe_specs)
}
Instead of using the previously created objects, let’s start the model creation from scratch:
model_workflow <- create_model_workflow(
logistic_reg() %>%
set_engine("glm"),
lipidomics_wide %>%
create_recipe_spec(metabolite_cholesterol)
)
model_workflow
══ Workflow ════════════════════════════════════════════════════════
Preprocessor: Recipe
Model: logistic_reg()
── Preprocessor ────────────────────────────────────────────────────
1 Recipe Step
• step_normalize()
── Model ───────────────────────────────────────────────────────────
Logistic Regression Model Specification (classification)
Computational engine: glm
Now, we can do the final thing: Fitting the data to the model with fit()
! 🙌
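We pipe the workflow object into fit() along with the data, saving it as fitted_model (a sketch of the step that produces the output below):

```r
fitted_model <- model_workflow %>%
  fit(lipidomics_wide)
fitted_model
```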
══ Workflow [trained] ══════════════════════════════════════════════
Preprocessor: Recipe
Model: logistic_reg()
── Preprocessor ────────────────────────────────────────────────────
1 Recipe Step
• step_normalize()
── Model ───────────────────────────────────────────────────────────
Call: stats::glm(formula = ..y ~ ., family = stats::binomial, data = data)
Coefficients:
(Intercept) genderW
0.1053 -0.7066
age metabolite_cholesterol
0.0069 1.0880
Degrees of Freedom: 35 Total (i.e. Null); 32 Residual
Null Deviance: 49.9
Residual Deviance: 42.8 AIC: 50.8
This gives us a lot of information, but what we are mostly interested in is the model estimates themselves. While this fitted_model
object contains a lot of additional information inside, workflows thankfully has a function to extract the information we want. In this case, it is the extract_fit_parsnip()
function.
fitted_model %>%
extract_fit_parsnip()
parsnip model object
Call: stats::glm(formula = ..y ~ ., family = stats::binomial, data = data)
Coefficients:
(Intercept) genderW
0.1053 -0.7066
age metabolite_cholesterol
0.0069 1.0880
Degrees of Freedom: 35 Total (i.e. Null); 32 Residual
Null Deviance: 49.9
Residual Deviance: 42.8 AIC: 50.8
To get this information in a tidier format, we use another function: tidy()
. This function comes from the broom package, which is part of tidymodels. But we should explicitly add it to the dependencies:
use_package("broom")
Then, we add the tidy()
function to our model using the %>%
pipe. Since we are using a logistic regression model, we need to consider how we want the estimates to be presented, probably depending on how we want to visualize our results. If we set exponentiate = TRUE
in tidy()
, the output estimates will be odds ratios; if we set exponentiate = FALSE, we will get the log odds ratios (the beta coefficients). Here we choose exponentiate = TRUE
:
fitted_model %>%
extract_fit_parsnip() %>%
tidy(exponentiate = TRUE)
# A tibble: 4 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 1.11 1.29 0.0817 0.935
2 genderW 0.493 0.779 -0.907 0.365
3 age 1.01 0.0377 0.183 0.855
4 metabolite_cholesterol 2.97 0.458 2.38 0.0175
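As a sanity check, recall that the raw glm() output above showed a metabolite_cholesterol coefficient of 1.088 on the log odds scale, and exp(1.088) ≈ 2.97, exactly the odds ratio shown here.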
We now have a data frame of our model results! Like we did with the workflow()
code that we converted into a function, we do the same thing here: Make another function (and move it to R/functions.R
)! 😛
#' Create a tidy output of the model results.
#'
#' @param workflow_fitted_model The model workflow object that has been fitted.
#'
#' @return A data frame.
#'
tidy_model_output <- function(workflow_fitted_model) {
workflow_fitted_model %>%
workflows::extract_fit_parsnip() %>%
broom::tidy(exponentiate = TRUE)
}
Replace the code in the doc/lesson.Rmd file to use the function:
fitted_model %>%
tidy_model_output()
# A tibble: 4 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 1.11 1.29 0.0817 0.935
2 genderW 0.493 0.779 -0.907 0.365
3 age 1.01 0.0377 0.183 0.855
4 metabolite_cholesterol 2.97 0.458 2.38 0.0175
If we revise the code so it is one pipe, it would look like:
create_model_workflow(
logistic_reg() %>%
set_engine("glm"),
lipidomics_wide %>%
create_recipe_spec(metabolite_cholesterol)
) %>%
fit(lipidomics_wide) %>%
tidy_model_output()
# A tibble: 4 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 1.11 1.29 0.0817 0.935
2 genderW 0.493 0.779 -0.907 0.365
3 age 1.01 0.0377 0.183 0.855
4 metabolite_cholesterol 2.97 0.458 2.38 0.0175
Let’s briefly cover what these columns and values mean.
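Briefly: term is the variable in the model; estimate is the size of the association (here an odds ratio, since we set exponentiate = TRUE); std.error is the uncertainty in the estimate (note that it stays on the log odds scale even when the estimate is exponentiated); statistic is the test statistic (a z-value for glm() models) used to calculate the p.value, which is the probability of seeing an estimate at least this extreme if there were no true association.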
Before ending, open the Git interface and commit the changes you made. Then push your changes up to GitHub.
- Define the model type, such as logistic_reg() for logistic regression, and set the computational “engine” with set_engine().
- Create a transformation recipe by setting variable roles with update_role() and adding transformation steps using any of the dozen different step_ functions.
- Create a workflow() that combines the defined model with add_model(), the transformation steps with add_recipe(), and the data with fit().
- Use tidy() on the model output, extracted using extract_fit_parsnip(), to get a data frame of the estimates and standard error for the variables in the model.