Chapter 11 Road to (Assignment) Submission

The purpose of this chapter is to provide guidance and a checklist of tasks that are recommended for planning and writing the Psychological Measurement Portfolio.

Please note that it is not exhaustive, and just represents the minimum needed. The final grade will be based on how you have engaged with the materials, the breadth and depth of your understanding, and the professionalism of the final submission. For more information, please see both the Learning Objectives for the module, and the University’s Graduate Outcomes.

11.1 General

The following points apply to the whole portfolio:

  • Follow APA-7th edition guidelines by default. If something in this document conflicts with APA-7th formatting, then this document will take priority. (Link to APA-7th Guide)

  • Please do not use serif fonts such as Times New Roman. Use sans serif fonts such as 11-point Calibri, 11-point Arial, or 10-point Lucida Sans Unicode. If you wish, you may use the OpenDyslexic font (Link to OpenDyslexic font)

  • References and tables should be in APA-7th format in line with your other assignments.

  • Avoid double spacing; please use 1.5 line spacing unless you have a good reason not to.1

  • Section numbering is optional.

  • Indent the first line of every paragraph.

  • The portfolio must be written in British or American Standard Written English (Link to page on standard English).

  • The portfolio should be proof-read for spelling and grammar errors (you may use Co-Pilot to do this - but include the original in the appendix).

  • Use level 1 headings as a minimum - you may use sub-headings if you wish, but do not exceed three levels unless absolutely necessary (it probably won't be). See the Misc chapter (Link to What do you mean by sub-heading levels?)

11.2 Short Glossary

  • Construct: Psychological attribute that is being explored.
  • Percentile: The percentage of people who scored below a particular score (Link to Chapter on standard scores).
  • Raw Score: The score that the Test Taker gets on the test. Might be number of correct answers (e.g. 27 out of 30), or average response to a set of personality questions (e.g. 3.5 out of 5) (Link to Misc section on Raw scores).
  • Standard Score: A score that presents the Test Taker's Raw score in the context of a norm group (a worked illustration of how these terms fit together follows this list).
  • Test: The method of quantifying a construct. Can also be called a measure, tool, etc.
  • Test User: The person who will administer the test to the Test Taker.
  • Test Taker: The person who will sit down and provide answers to the questionnaires.
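
Using the usual conventions (a z score with a mean of 0 and a standard deviation of 1, and a T score with a mean of 50 and a standard deviation of 10), the terms above relate to one another as follows:

\[ z = \frac{RawScore - NormMean}{NormSD} \qquad T = 50 + 10z \]

So a Test Taker whose Raw Score falls exactly one norm-group standard deviation above the norm-group mean has z = 1, a T score of 60, and sits at roughly the 84th percentile.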

11.3 Part 1 - The Technical Manual

Starting Points

In part 1 you are responding to one of the two case studies provided on Moodle (Link to Scenarios) and providing a solution to the problem that has been provided. The first step is to carefully read the chosen case study and identify key words that imply the involvement of psychological constructs. For a guide to doing this see Week 4’s workshop materials (Link to Week 4 Workshop video).

Once you have identified the constructs that are of interest in the case study, you need to choose at least two and match each to an appropriate measure from the list provided (Link to list of measures).

What do you mean by Boundary Condition?

In Week 4 I mentioned boundary conditions, and this has caused a little confusion.

There may be a particular policy that you can find indicating that people with a particular attribute (e.g. General Learning Disability) are eligible for a different type of treatment (e.g. a targeted programme tailored for this group of people). If this is the case, and scoring above or below a score on a gold standard psychometric test is part of the diagnostic criteria, then you can discuss this as a boundary or cut-off score when discussing the appropriateness of the construct (not the test). However, you must be clear that the screening test is not intended for diagnostic purposes. For a more in-depth discussion see the following link (Link to technical discussion of boundaries).

Introduction Section

After choosing the measures that you will use to assess the constructs of interest, you will write an introduction section that presents the constructs and measures you have chosen. This will involve finding and summarising evidence to support the existence of each construct (see the Week 2 playlist and materials on construct validation (Link to Week 2 Playlist)), and evidence of the reliability and validity of the measure that you have chosen (see the Week 4 materials (Link to Week 4 Playlist)).

In addition, you should try to find literature that has used the construct (and ideally the measure) in a forensic setting. To do this you could find the starter paper in Google Scholar, click on ‘cited by’, and search within the citing articles for key words such as ‘forensic’.

Finally, you might discuss the unique information that each measure will contribute towards the profile.

This will form the introduction section of the manual.

Method Section

You will then write a method section that provides important information about the measures that you have chosen and the norm group that the test taker will be compared with.

Norm Group Characteristics (i.e. Participants section)

This will include the characteristics of the participants who make up the norm group for each measure. Demographic information can be taken from the Participant Information file, but more detailed information will be available in the stage two manuscript by Wilson and Bishop (Link to starting-point papers folder). If you have never heard of a Registered Report before, see the following link: (Link to Registered Report Misc section). You should also provide the approximate date that the data were collected - even if this is just the year. Any methodological information about the data collection should also be included - again, this will come from the Wilson and Bishop stage 2 paper.

Materials and Administration Instructions (i.e. Materials and Procedure)

This should contain everything that the test user needs in order to successfully administer and score the test. It will include the number of items in each scale, the number of scale points (e.g. how many response options each Likert item has), an indication of which items need to be reversed (if any), and the scoring method for the scale (e.g. should they add up all of the responses, or take the mean). You can get this information from the starter papers on Moodle (Link to starting-point papers folder).
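
If it helps to see the scoring logic written out, here is a minimal sketch. The scale length, number of response options, and reverse-keyed item positions are purely illustrative - take the real values from the starter paper for your chosen measure, and use whichever scoring rule (sum or mean) the original authors specify.

```python
# Minimal scoring sketch for a hypothetical 10-item scale with 5-point responses.
# Item positions, scale length, and reverse-keyed items are illustrative only.

responses = [4, 2, 5, 3, 1, 4, 2, 5, 3, 4]   # one Test Taker's responses (1-5)
reverse_keyed = {2, 7}                        # 0-based positions of reverse-keyed items
scale_points = 5

scored = [
    (scale_points + 1 - r) if i in reverse_keyed else r
    for i, r in enumerate(responses)
]

total_score = sum(scored)                 # if the scale is sum-scored
mean_score = total_score / len(scored)    # if the scale is mean-scored

print(total_score, round(mean_score, 2))
```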

You might also consider how the test user might administer the battery. Have you been instructed to use a particular medium? Is the test timed? Do you need exam-like conditions? It’s unlikely that information like this will be available, so you can include your own instructions.

Psychometric Properties (i.e. Results Section)

You will then write a section detailing the Psychometric Properties of the scale. This will only be based on the norm group - you should not discuss the test taker in part 1. You will use Jamovi to produce tables for reliabilities, means, standard deviations, and percentiles (Link to Jamovi Playlist). You may also include exploratory or confirmatory factor analysis if you wish, but only do this if you can show how it is relevant to the document. After you have this information, refer to the data record sheet and the generic scoring template to get the Classical Test Theory parameters (these were covered in Week 5 (Link to Week 5 playlist)): the raw score standard error of measurement, the T score standard error of measurement, and the T score standard error of difference. For a short summary of these issues refer to Chapter 4 (Link to chapter on reliability).
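
As a reminder, and assuming T scores with a standard deviation of 10, the standard Classical Test Theory formulas behind these quantities are:

\[ SEM_{raw} = SD_{raw}\sqrt{1 - r_{xx}} \qquad SEM_{T} = 10\sqrt{1 - r_{xx}} \qquad SE_{diff(T)} = \sqrt{SEM_{T_1}^{2} + SEM_{T_2}^{2}} \]

where \(r_{xx}\) is the reliability estimate for the scale (e.g. Cronbach's alpha) and the subscripts 1 and 2 refer to the two measures being compared. These values should agree with the generic scoring template; if they do not, follow the template.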

In the portfolio template there is a suggested ‘validity’ heading in the Psychometric Properties section. This is optional, but could be useful. For instance, if you are arguing that Intolerance of Uncertainty is a useful screening measure because it reflects anxiety, but only in specific situations, you could provide correlations between the IUS and the GAD-7 to demonstrate that the two constructs are related, but that the IUS also provides unique information about an individual that is not captured by the GAD-7. If a correlation is high, but not too high (e.g. 0.4-0.6), this might indicate that there is convergent validity between the two measures, but that each is also contributing something unique to the profile. If the correlation is very high (e.g. above 0.9) then you are measuring very similar constructs, and you might as well just use one of the measures.
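
If you would rather compute such a correlation outside Jamovi, here is a small, hypothetical sketch; the file and column names are made up for illustration, and the interpretation bands simply restate the rough guide given above.

```python
# Hypothetical example: correlate total scores on two measures in the norm-group data.
# The file name and column names are illustrative - use your own variables.
import pandas as pd

df = pd.read_csv("norm_group.csv")
r = df["ius_total"].corr(df["gad7_total"])   # Pearson correlation by default

if r >= 0.9:
    print(f"r = {r:.2f}: the measures overlap heavily - one may be redundant")
elif 0.4 <= r <= 0.6:
    print(f"r = {r:.2f}: related constructs, but each adds unique information")
else:
    print(f"r = {r:.2f}: interpret against your validity argument")
```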

11.3.1 Checklist for Part 1

Note that the following is a guide and simply represents the minimum to include. You will be marked on how you present this information, how you justify your decisions, and how you show that you have understood the course content and met the learning outcomes. This is down to you.

Section and task (tick each when done):

Prelims
  • Choose a scenario
  • Identify keywords
  • Select constructs
  • Select measures
  • Review the literature

Introduction
  • General introduction
  • Introduce each construct
  • Discuss the construct validity of each
  • Introduce each measure
  • Present reliability and validity evidence
  • Discuss the profile

Method
  • Describe the norm group
  • Describe the materials and administration

Results
  • Calculate the norm group characteristics
  • Check the reliability of the measures
  • Calculate the Classical Test Theory parameters

11.4 Part 2 - Sample Feedback

In Part 2 you are presenting sample feedback for a person who has completed your measures. This is not a specific person from the institution; rather, it is a template that someone else could base their own report on.

Starting Points

The first step will be to use the norm group data from Jamovi that you prepared in part 1 to fill out the remaining rows in the data record sheet. The first row will be the Test Taker's score on the test.

You have been given four z scores – these need to be converted to the scale that your measures are on. Select one z score for each of the measures that you have chosen. You can convert the z score to a Raw score using the following equation:

\[ RawScore = (NormSD \times zScore) + NormMean \]

So for a z score of 1.8, a Raw norm group mean of 2.5, and a Raw norm group standard deviation of 1.2, you would get the following: \[ RawScore = (1.8 \times 1.2) + 2.5 = 4.66 \]

These are the scores that you will be writing up and interpreting for part 2.

Now you can use the rest of the information from the results table to complete the remaining rows (a short worked sketch of these calculations is given after this list):

  • The percentile equivalent of this score (use the percentile table to get this information)

  • The confidence interval around this score (use the Standard Error of Measurement to calculate this).

  • The confidence interval around their percentile score (use the percentile table to get this information).

  • Their T-Score

  • The 68% Confidence interval around their T score

  • Plot the 68% Confidence interval to get the range that the test-taker is in (see Portfolio template)
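
To make these steps concrete, here is a minimal sketch of the record-sheet calculations using illustrative values only (the norm mean of 2.5, norm SD of 1.2, and z score of 1.8 from the example above, plus an assumed reliability of .85). Substitute your own norm-group figures and selected z score, and note that the percentile here comes from a normal-distribution approximation rather than your percentile table.

```python
import math
from statistics import NormalDist

# Illustrative values only - substitute your own norm-group figures and z score.
norm_mean, norm_sd = 2.5, 1.2       # norm-group mean and SD (raw-score units)
reliability = 0.85                  # e.g. Cronbach's alpha from Jamovi
z = 1.8                             # the z score you selected for this measure

raw_score = (norm_sd * z) + norm_mean           # raw-score equivalent (4.66 here)
t_score = 50 + 10 * z                           # T score (mean 50, SD 10)

sem_raw = norm_sd * math.sqrt(1 - reliability)  # raw-score SEM
sem_t = 10 * math.sqrt(1 - reliability)         # T-score SEM

raw_ci_68 = (raw_score - sem_raw, raw_score + sem_raw)   # 68% CI = score +/- 1 SEM
t_ci_68 = (t_score - sem_t, t_score + sem_t)

percentile = NormalDist().cdf(z) * 100          # rough check only

print(round(raw_score, 2), round(t_score, 1), round(percentile, 1))
print(tuple(round(x, 2) for x in raw_ci_68), tuple(round(x, 1) for x in t_ci_68))
```

The percentile and its interval that go into the report should still come from the percentile table you produced in Jamovi; the approximation above is only a sanity check.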

11.4.0.1 Should I mention Boundary Conditions?

Remember that the purpose of this screening tool is to provide one source of information that will aid the organisation's decision to either:

  • (A) let the person enrol on the group rehabilitation programme immediately, or

  • (B) refer them for further assessment.

The tool is not for diagnostic purposes, and you, as the psychometrics consultant, are not the one making the decision.

So in terms of the feedback report, you will not be using diagnostic criteria as a boundary value (e.g. a score of x is indicative of…). Rather, any statements will be based on the range that their score is in relative to the norm group, and presented in terms of a typical profile (people who are in the high to highest range for this measure are more likely to…).

The Report

The report should include a brief introduction to yourself, the purpose of the test, the measures that were used and why, each score along with the margin of error surrounding it (68% confidence interval) and a comparison with the norm group (e.g. T scores or percentiles).

Some examples of the type of language used to give feedback are given in Chapter 8 (Link to Test Feedback Guidance Chapter).

It is recommended that you append a complete set of calculations (e.g. Test Data Record Sheet).

A checklist for the things to include in Part 2 can be found in the Portfolio template document but is repeated below too. Again, note that this is the minimum to be included.

11.4.0.2 Checklist for Part 2

1 Do you introduce yourself and/or the organisation?
2 Do you remind the reader of the purposes of the report?
3 Do you introduce each measure and briefly describe how and when each was administered?
4 Do you give a brief description of the construct each tool measures before describing the score for each test?
5 Do you give a rationale and justification for the use of each measure before describing the score for each test?
6 Do you explain clearly the nature of norm group comparison and their relevant characteristics?
7 Do you describe the meaning of the scale (e.g. percentiles) or scales (e.g. percentiles and T scores) accurately and in terms which the test taker could understand?
8 Do you communicate clearly and accurately the score for each measure?
9 Do you communicate clearly and accurately the confidence limits associated with each score?
10 Are any statements of implication (e.g. risk) supported by background information for the test (e.g. validity)?
11 Do you communicate clearly and accurately any score comparisons made across the measures taken?
12 Do you give clear guidance as to the appropriate weight to be put on the findings (e.g. such tests are only one source of information)?
13 Have you briefly mentioned any ethical guidelines associated with the report?
14 Do you give clear closure to the feedback report?
15 Have you appended the record sheet and interval plots?

11.5 Part 3 - Critical Reflection

In the final part of the portfolio you will discuss some of the limitations of your scales that could be considered when rolling out a larger screening policy.

This can either be in report format (formal, passive, third-person), or in the form of a letter (formal, second person perspective).

This will include any possible sources of bias in the constructs, scales, or norm group, that may exist as well as suggestions for alternatives or aspects that might be changed (Link to Chapter on test bias and fairness).

You might also wish to include a technical critique of the methods that were used, to show that you have understood their limitations and considered alternatives - e.g. issues with the test paradigm that you have used (e.g. Classical Test Theory) and suggestions for other methods (Link to Item Response Theory Video) (Link to very brief description of Generalisability Theory).

This is where you will include any major limitations to the measures that you may have encountered during your literature reviewing.

If you feel that any of the measures are adequate, you could also suggest follow-up tests that might be used after screening.

The whole section must be evidence-based.

11.6 Appendix

Checklist for Appendix

The Jamovi output and Data Record Sheet calculations are requested so that I can see what happened in the event that something has gone wrong. The plot of the 68% confidence intervals and the Pre-Co-Pilot draft are required.

Attach each of the following:

  • Jamovi output
  • Data record sheet with calculations
  • Plot of 68% confidence intervals
  • Pre-Co-Pilot draft

  1. My scrolling finger gets sore when double spacing is used.