Second report: inferential statistics
Your second report will test the hypotheses you registered in Blackboard. The report will be eligible for the Undergraduate Class Project Competition, and I encourage you to participate; I will happily give you feedback on adapting your report for submission.
You will test your hypotheses on the half of the data you did not use to generate them. You can select this subset with:
```r
set.seed(Last4DigitsFromYourStudentIDNumber)
# Remove a random half of the rows; what remains is the half you did
# NOT use when generating your hypotheses
datasubset <- dataset[-sample(nrow(dataset))[1:(nrow(dataset) / 2)], ]
```
Criteria for C
Your report will…
- … be written in readable correct English, spell-checked.
- … be written in RMarkdown that knits without error.
- … be written by yourself: any substantial outside help has to be clearly marked as such and any citations should follow standard academic citation styles.
- … state both the null and the alternative hypothesis, completely and in full detail, for each case.
- … state a confidence level up front.
- … test every registered hypothesis.
- … report test statistics and either p-values or confidence intervals for every test performed, including model details (coefficients) where appropriate.
- … state a judgement on each test: significant or not?
- … interpret the judgement: what does it mean?
- … verify suitability for the tests you chose. This includes checking for normality of data or of residuals, which can be done using a QQ-plot.
- … fully document any additional data manipulation you decided to perform: if your data is very skewed, a logarithmic transform may make tests appropriate that previously were not.
- … name each test type you are using as you use it.
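As a sketch of the normality check and log transform mentioned above (the variable `x` here is simulated skewed data standing in for one of your own variables):

```r
# Sketch: checking normality with a QQ-plot, and re-checking after a
# log transform. `x` is simulated stand-in data, not course data.
set.seed(1)
x <- rlnorm(200)                # log-normal: strongly right-skewed

qqnorm(x); qqline(x)            # points bend away from the line: not normal
qqnorm(log(x)); qqline(log(x))  # after log: points follow the line closely

shapiro.test(x)$p.value         # very small: rejects normality
shapiro.test(log(x))$p.value    # typically large: consistent with normality
```

If you apply a transform like this, say so explicitly in the report and run your tests on the transformed variable.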
Criteria for D
Up to four tasks with minor errors. For example: the report file does not knit, but errors are relatively easy to fix; report has grammatical or spelling errors; etc.
Criteria for F
Not handing a report in on time. Omitting one of the instructed tasks completely. Handing in a report where any knitting errors prove difficult or time-consuming to correct. Handing in a report where four or more of the criteria for C have minor errors.
Criteria for A
To achieve the grade of A, your report will also …
- … be structured as a research paper, with sections
- Introduction and Background: specifying what your dataset is about.
- Data: describe the data related to your tests. This does not need to be as thorough and detailed as the first report, but should specify all variables and subsets that go into the tests.
- Hypotheses: specifying your hypotheses in detail. Pick a significance level for each.
- Methods: specify and describe the analysis methods you have chosen to use. Any time a specific analysis method relies on assumptions, this is where you test these.
- Results: perform your statistical tests. Report summary statistics, p-values, and conclusions for each test.
- … critically compare the tests you chose against possible alternatives, and motivate your choices: what other tests could you have used, and why did you use this one?
- … report both p-values and confidence intervals wherever possible.
- … critically evaluate the weaknesses of your chosen tests: were there tests where the conditions were not completely fulfilled? what arguments are there in favor of or against trusting their results? are the tests known to be robust against the kinds of failures you observed?
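As an illustration of reporting both a p-value and a confidence interval for a single test, here is a two-sample t-test on simulated stand-in data (your variables and groups will differ):

```r
# Sketch: a two-sample t-test, reporting both the p-value and the
# confidence interval for the difference in means. Simulated data;
# substitute your own variables.
set.seed(1)
groupA <- rnorm(40, mean = 5.0, sd = 1)
groupB <- rnorm(40, mean = 5.6, sd = 1)

result <- t.test(groupA, groupB)
result$statistic   # the t test statistic
result$p.value     # the p-value
result$conf.int    # 95% confidence interval for the mean difference
```

Reporting both quantities lets the reader see not just whether the difference is significant, but how large it plausibly is.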
First report: descriptive statistics
Your first report will describe your dataset in detail, with special focus on a handful of variables in the data.
Criteria for C
Your report will be readable, written in English, without grammatical or spelling errors. It will be submitted as an RMarkdown file, together with all data files needed to knit the report into a finished text. Your RMarkdown file will run on the lab computers without errors.
Your report will describe general information about the dataset, including:
- Who collected the data?
- When?
- Where?
- Why?
You will include a suggestion of the kinds of questions the data was collected in order to answer. For instance, the Iris data we have looked at could be said to have been collected to find methods for determining the species of flower specimens.
You will describe the layout of the data:
- How many cases?
- How many variables? What are they? What do they contain? Are they numeric, dates, categorical?
- For each numeric variable give an appropriate measure of center, and an appropriate measure of spread.
- For each categorical variable give an overview of possible values. If there are only a few categories, list all – if there are many categories, tell us how many there are.
- Are there missing values? How are they encoded? How common are they? If there are missing values, include some number (count or percentage) that describes how much is missing in your description of each variable.
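The summaries above can be produced with base R functions; here is a sketch using the built-in `airquality` data as a stand-in for your dataset:

```r
# Sketch of per-variable summaries, using the built-in airquality
# data as a stand-in for your dataset.
data(airquality)

nrow(airquality)                  # number of cases
str(airquality)                   # variables and their types

# Numeric variable: a measure of center and a measure of spread
median(airquality$Ozone, na.rm = TRUE)
IQR(airquality$Ozone, na.rm = TRUE)

# Discrete variable: overview of possible values
table(airquality$Month)

# Missing values: count and proportion
sum(is.na(airquality$Ozone))
mean(is.na(airquality$Ozone))     # proportion missing
```

Note that `median`/`IQR` versus `mean`/`sd` is itself a choice to justify: the former pair is more appropriate for skewed data.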
You will pick a handful of variables for more careful study. You should pick no more than 5 each of numeric, categorical and date/time-like variables. Fewer is fine, and if you don’t have any dates or timestamps, you clearly will not pick any of those.
For each of the picked variables, produce detailed descriptions of their distributions. Include an appropriate plot to describe how the values of the variable are distributed. Where appropriate, evaluate whether you think variables have a normal distribution. Explain your reasoning.
For each pair of picked variables, describe how they relate to each other. Where appropriate, produce plots, correlations, or two-way tables.
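For both the single-variable and pairwise descriptions, a few base R functions cover the common cases; here is a sketch, again using `airquality` as a stand-in:

```r
# Sketch: distribution of one variable, and relationships between
# pairs of variables, using airquality as a stand-in.
data(airquality)

hist(airquality$Temp)                             # one variable's distribution
qqnorm(airquality$Temp); qqline(airquality$Temp)  # rough normality check

# Numeric vs numeric: scatterplot and correlation
plot(Ozone ~ Temp, data = airquality)
cor(airquality$Ozone, airquality$Temp, use = "complete.obs")

# Discrete vs numeric: side-by-side boxplots
boxplot(Temp ~ Month, data = airquality)
```

For two categorical variables, `table(var1, var2)` gives the two-way table mentioned above.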
Criteria for D
Up to four tasks with minor errors. For example: the report file does not knit, but errors are relatively easy to fix; report has grammatical or spelling errors; etc.
Criteria for F
Not handing a report in on time. Omitting one of the instructed tasks completely. Handing in a report where any knitting errors prove difficult or time-consuming to correct. Handing in a report where four or more of the criteria for C have minor errors.
More criteria may be added if needed.
Criteria for A
Include, fit and comprehensively describe models for all variable relationships. Give plots, coefficients, proportions of explained variances and a written and explained evaluation of the quality of each model.
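A minimal sketch of fitting and describing one such model, using `airquality` as a stand-in for your own variables:

```r
# Sketch: fitting and summarizing a simple linear model, using
# airquality as a stand-in for your variables.
data(airquality)

model <- lm(Ozone ~ Temp, data = airquality)

summary(model)            # coefficients, their p-values, and R-squared
coef(model)               # intercept and slope
summary(model)$r.squared  # proportion of variance explained

plot(Ozone ~ Temp, data = airquality)
abline(model)             # fitted line drawn over the scatterplot
```

Your written evaluation should interpret these numbers: what the slope means in context, and whether the explained variance is large enough for the model to be useful.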
Include a discussion that compares and contrasts possible presentations for your data: what options were you choosing between for plotting, for choosing measures of spread and center, and why did you pick the ones you used?
Include a critique of the original data collection. Given your understanding of why the data was collected, does the data support learning about the questions you believe it was collected to answer? What would you like to see included in the dataset to better address these questions? Leave feasibility aside: what is your wish list for extending the data?