Reliability/consistency and scoring

Moderators: statman, Analyst Techy, andris, Fierce, GerineL, Smash

lgrofils
Posts: 5
Joined: Wed Jul 02, 2014 8:17 am

Reliability/consistency and scoring

Postby lgrofils » Wed Jul 02, 2014 9:25 am

Hello everybody,

I really need your help with this matter.

I am using Likert scales developed to measure a differentiated four-field view of career aspirations. The four existing career aspiration scales consist of 49 items in total. Each item describes a feature characteristic of one of the four career fields, and each individual aspires most to one of the four fields.
Each career field is defined by tight/loose coupling (the interdependency between company and individual) and by career stability/variability, so the four fields can be arranged in a 2x2 matrix.

The only information I have is the following:
Items with (-) must be reversed before calculating a mean score. The last scale was developed afterwards upon finding that our respondents mainly differentiated between CW and the other three fields.

Items Company World (tight coupling-stability):
4, 6, 7(-), 8(-), 13(-), 14, 15(-), 16, 17, 23, 25, 29(-), 32(-), 33(-), 38(-), 40(-), 41, 42(-), 44, 45(-), 46, 47(-), 48, 49(-)
Items Free-Floating Professionalism (tight coupling-variability):
3, 4(-), 6(-), 17(-), 20, 23(-), 25(-), 27, 32, 38, 40, 42, 43, 44(-), 48(-), 49
Items Self-Employment (loose coupling-stability):
1, 2, 11, 15, 17(-), 29, 33, 39, 47
Items Chronic Flexibility (loose coupling=variability):
3, 9, 10, 13, 15, 23(-), 29, 32, 33, 40, 41(-), 42, 44(-), 45
Items Organizational (vs. post-org.) career aspiration:
4, 6, 7(-), 8(-), 13(-), 14, 15(-), 16, 17, 23, 25, 29(-), 32(-), 33(-), 38(-), 40(-), 41, 42(-), 44, 45(-), 46, 47(-), 48, 49(-), 3(-), 20(-), 27(-), 43(-), 1(-), 2(-), 11(-), 39(-), 9(-), 10(-)


I have two issues in analysing these scales:

Firstly: how do I calculate the reliability of the four scales? I tried Cronbach's alpha but it is very low (< 0.5) for each scale... I suppose this is normal, since some items presumably measure coupling and others measure variability, so the correlations are low... (but I don't know which items measure coupling and which measure variability).
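For concreteness, this is roughly the calculation I mean, shown as a minimal Python sketch for the Company World scale (the file name, the item1..item49 column names and the 1-5 Likert range are my assumptions, not part of the original scale documentation):

Code: Select all
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items table, all items keyed in the same direction."""
    items = items.dropna()
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

LIKERT_MAX = 5                                    # assumption: 1-5 response scale
df = pd.read_csv("career_items.csv")              # hypothetical file with columns item1..item49

# Company World items from the list above; the (-) items are reverse-coded before averaging.
cw_plus  = [4, 6, 14, 16, 17, 23, 25, 41, 44, 46, 48]
cw_minus = [7, 8, 13, 15, 29, 32, 33, 38, 40, 42, 45, 47, 49]

cw_items = pd.concat(
    [df[[f"item{i}" for i in cw_plus]],
     (LIKERT_MAX + 1) - df[[f"item{i}" for i in cw_minus]]],   # reverse-code the (-) items
    axis=1,
)
print("alpha (Company World):", round(cronbach_alpha(cw_items), 3))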

Secondly: after calculating the score on each scale for each individual, how do I determine the field to which each participant most aspires? I am thinking of using z-scores and allocating each individual to the career field with the highest z-score. What do you think?
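What I have in mind is roughly this (again a Python sketch with the same assumed file, column names and 1-5 range; the "pos"/"rev" lists simply split each item list above into normal and (-) items):

Code: Select all
import pandas as pd

LIKERT_MAX = 5                                    # assumption: 1-5 response scale
df = pd.read_csv("career_items.csv")              # hypothetical file with columns item1..item49

# Item numbers per scale, taken from the lists above; "rev" holds the (-) items.
scales = {
    "CompanyWorld":       {"pos": [4, 6, 14, 16, 17, 23, 25, 41, 44, 46, 48],
                           "rev": [7, 8, 13, 15, 29, 32, 33, 38, 40, 42, 45, 47, 49]},
    "FreeFloatingProf":   {"pos": [3, 20, 27, 32, 38, 40, 42, 43, 49],
                           "rev": [4, 6, 17, 23, 25, 44, 48]},
    "SelfEmployment":     {"pos": [1, 2, 11, 15, 29, 33, 39, 47],
                           "rev": [17]},
    "ChronicFlexibility": {"pos": [3, 9, 10, 13, 15, 29, 32, 33, 40, 42, 45],
                           "rev": [23, 41, 44]},
}

def scale_mean(data: pd.DataFrame, pos, rev) -> pd.Series:
    """Mean score per respondent, reverse-coding the (-) items first."""
    keep = data[[f"item{i}" for i in pos]]
    flip = (LIKERT_MAX + 1) - data[[f"item{i}" for i in rev]]
    return pd.concat([keep, flip], axis=1).mean(axis=1)

scores = pd.DataFrame({name: scale_mean(df, s["pos"], s["rev"]) for name, s in scales.items()})

# Standardise each scale across respondents and assign each person to the highest z.
z = (scores - scores.mean()) / scores.std(ddof=1)
assigned_field = z.idxmax(axis=1)
print(assigned_field.value_counts())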

Thank you in advance :D

Lara
GerineL
Moderator
Posts: 1477
Joined: Tue Jun 10, 2008 4:50 pm

Re: Reliability/consistency and scoring

Postby GerineL » Wed Jul 02, 2014 10:48 am

Hi Lara,
lgrofils wrote: Firstly: how do I calculate the reliability of the four scales? I tried Cronbach's alpha but it is very low (< 0.5) for each scale... I suppose this is normal, since some items presumably measure coupling and others measure variability, so the correlations are low... (but I don't know which items measure coupling and which measure variability).
That depends on a couple of things, but first:
Is this an existing questionnaire whose reliability you want to test on your data, or is it a new scale?
In the latter case, it may be useful to perform a factor analysis to see whether all the items are indeed connected to the specific scales you imagined.
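Something along these lines, as a rough Python sketch with the factor_analyzer package (the file and column names are placeholders; SPSS's FACTOR procedure can give you the same kind of output):

Code: Select all
import pandas as pd
from factor_analyzer import FactorAnalyzer        # pip install factor_analyzer

df_items = pd.read_csv("career_items_recoded.csv")    # hypothetical file: 49 item columns,
                                                       # reverse-keyed items already recoded

fa = FactorAnalyzer(n_factors=4, rotation="oblimin")   # oblique rotation: factors may correlate
fa.fit(df_items)

loadings = pd.DataFrame(fa.loadings_, index=df_items.columns,
                        columns=["F1", "F2", "F3", "F4"])
print(loadings.round(2))             # which items actually load on which factor?
print(fa.get_factor_variance())      # SS loadings, proportion and cumulative variance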

To calculate reliability, it is important to first recode the reverse-coded items and then calculate the differences. Maybe that is something you could try first.
lgrofils wrote: Secondly: after calculating the score on each scale for each individual, how do I determine the field to which each participant most aspires? I am thinking of using z-scores and allocating each individual to the career field with the highest z-score. What do you think?
The answer to this question seems to be more of a theoretical decision than a statistical one. Are there no instructions for this provided with the scale?

You first have to determine when someone belongs to each field.
Suppose a person scores high on two scales, but slightly higher on one: what do you want to do with that person?
And so on. So make your decisions based on theory first, and then determine how you can do this in SPSS.



Just a note: I don't really understand the "(loose coupling=variability)" etc., but I don't know whether that is necessary to answer your question...
lgrofils
Posts: 5
Joined: Wed Jul 02, 2014 8:17 am

Re: Reliability/consistency and scoring

Postby lgrofils » Wed Jul 02, 2014 1:08 pm

Thank you for your quick reply!


This is an existing scale.
I have already recoded the reverse-coded items and then ran a factor analysis. For the Company World field, for example, most correlation coefficients are under 0.3... However, the KMO is above 0.6 (0.815) and Bartlett's test is significant (p = 0.000). And when I look at the total variance extracted, the cumulative % of variance is 68.7%. I suppose that is not enough to prove reliability...
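For reference, those factorability checks can be reproduced roughly like this (a Python sketch with the factor_analyzer package; the file and column names are placeholders):

Code: Select all
import pandas as pd
from factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

df_items = pd.read_csv("career_items_recoded.csv")    # hypothetical file: recoded item columns

kmo_per_item, kmo_total = calculate_kmo(df_items)      # per-item and overall KMO
chi2, p = calculate_bartlett_sphericity(df_items)      # Bartlett's test of sphericity
print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi2:.1f}, p = {p:.4f}")

As far as I understand, KMO and Bartlett's test only say whether the correlation matrix is worth factoring at all; neither is an internal-consistency (reliability) statistic, so a decent KMO can go together with a low alpha.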

I mentioned the variables "coupling" and "stability" because I thought the low correlations between items might be due to the fact that, within one scale, some items measure coupling and some measure stability, which are two different variables that are not necessarily correlated...
GerineL wrote: To calculate reliability, it is important to first recode the reverse-coded items and then calculate the differences.
What do you mean by "calculate the differences"?



The existing study says:
"The aspiration scores for each field were converted to percentile ranks and then to z-scores by means of area transformation to make them comparable and to determine the field to which each participant most aspired." I will do this.

Thank you very much for your help!
GerineL
Moderator
Posts: 1477
Joined: Tue Jun 10, 2008 4:50 pm

Re: Reliability/consistency and scoring

Postby GerineL » Wed Jul 02, 2014 1:19 pm

Just a note, though: it makes no sense to me to compute reliability across items that you know are not alike, or to compute a scale score (assuming this is a mean score or something similar) from items that are not correlated.
Why would you want to do that?
lgrofils
Posts: 5
Joined: Wed Jul 02, 2014 8:17 am

Re: Reliability/consistency and scoring

Postby lgrofils » Wed Jul 02, 2014 2:53 pm

Is it not mandatory to check the consistency of a scale before using it? They actually do this in the existing study (the source), but I don't know how...
I need the scores in order to determine the field to which each individual is assigned...
GerineL
Moderator
Posts: 1477
Joined: Tue Jun 10, 2008 4:50 pm

Re: Reliability/consistency and scoring

Postby GerineL » Fri Jul 04, 2014 9:56 am

What I mean is: if you know beforehand that the total scale does not make any sense (because it does not measure one thing), why would you use it?
If you know it is not reliable, don't use it.

But perhaps I just don't understand this composite scale / the scores you used...
