Fall risk assessment tools


Fall assessment tools are used to determine a patient's risk for falls on admission to a unit. Many fall assessment tools have been developed by health care organizations and for specific hospital units. The tools are tested for their reliability and validity, including ease of use, consistency, and ability to accurately predict which patients are most likely to fall. Some researchers report that validity scores vary widely, with 43% to 95% sensitivity and 27% to 78% specificity. Under- or overestimating fall risk can miss some patients who are at risk for falls or unnecessarily limit the independence and mobility of others (Park, 2018).

Reliability

Reliability indicates there is consistency in the results when the tool is used under the same conditions. There are several ways this consistency can be tested. The most common are the following:

The timed up and go (TUG) test is used to measure gait and balance. The patient is asked to rise from a sitting position without using their arms, walk 10 feet, turn, walk back, and sit down again. The task should be completed in under 10 seconds.

The evaluator checks off the following:

( ) Slow tentative pace
( ) Loss of balance
( ) Short strides
( ) Little or no arm swing
( ) Steadying self on walls
( ) Shuffling
( ) Not using assistive device properly
https://www.cdc.gov/steadi/pdf/TUG_Test-print.pdf
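The scoring logic described above can be sketched in a few lines. This is an illustration only; the function name, the observation list, and the simple flagging rule (over 10 seconds or any checked observation suggests elevated risk) are assumptions based on the text, not part of the CDC form.

```python
# Hypothetical sketch of flagging a TUG result, based on the checklist
# and the 10-second guideline described in the text.

OBSERVATIONS = [
    "slow tentative pace",
    "loss of balance",
    "short strides",
    "little or no arm swing",
    "steadying self on walls",
    "shuffling",
    "not using assistive device properly",
]

def tug_flags(time_seconds, checked):
    """Return True if the TUG result suggests elevated fall risk."""
    # Over the time guideline, or any observation checked, warrants follow-up.
    return time_seconds >= 10 or len(checked) > 0

print(tug_flags(8.5, []))                    # completed in time, nothing checked → False
print(tug_flags(12.0, ["shuffling"]))        # over time and one observation → True
```

In practice a result like this would be one input to a broader fall risk assessment, not a verdict on its own.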

To check for test-retest reliability, the evaluator would repeat the test on the same person, under the same conditions, at two different times. The results from the two times are then compared.
To check for interrater reliability, two different evaluators would observe the same person, often simultaneously, and compare their ratings.

To check for internal consistency, after several people have been rated with the tool, related items are compared against each other to see whether they receive similar ratings. For example, the slow pace, loss of balance, and short stride items could be compared with the arm swing, steadying on walls, and shuffling items. That answers the question: do patients who walk with a slow pace, lose their balance, and take short strides also show little or no arm swing, steady themselves on the walls, and shuffle? If the data show that they do, the tool is said to have internal consistency.

The standard is to have 80% or higher consistency in the ratings across all these measures.
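The internal-consistency comparison described above can be sketched as follows. The patient data and the pass/fail rule are invented for illustration: for each patient, we check whether being flagged on the first cluster of items (slow pace, loss of balance, short strides) matches being flagged on the second (arm swing, steadying on walls, shuffling).

```python
# Rough sketch of an internal-consistency check with invented ratings.
# Each tuple: (slow pace, loss of balance, short strides,
#              little/no arm swing, steadies on walls, shuffles)
patients = [
    (True,  True,  True,  True,  True,  True),
    (False, False, False, False, False, False),
    (True,  True,  False, True,  False, True),
    (False, False, False, False, True,  False),
    (True,  True,  True,  True,  True,  False),
]

matches = 0
for p in patients:
    cluster_a = any(p[0:3])   # flagged on pace/balance/stride items?
    cluster_b = any(p[3:6])   # flagged on arm swing/walls/shuffling items?
    matches += cluster_a == cluster_b

consistency = matches / len(patients)
print(f"{consistency:.0%} of patients rated consistently across item clusters")
```

With this made-up data, 4 of 5 patients (80%) are rated consistently, exactly at the stated standard.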

Validity

Validity measures the degree to which a method accurately measures what it is intended to measure. An important concept is that good reliability does not indicate validity. Just because a tool is consistent in how it measures gait and balance does not mean it will necessarily predict a patient's tendency toward falls. A useful tool needs both good reliability and good validity.

There are several ways to test a fall assessment tool for validity.

It is reported in the literature that unsteady balance, for example, is a predictor of falling, so a tool built around balance measurement can be said to have reasonable construct validity.

Do gait and balance measure all aspects of predicting falls? The answer is no. There are other risk factors, such as side effects from medications. That is why a gait and balance measurement is usually one part of a tool and not the only measurement.

Criterion validity could be determined by comparing the results of this tool with those of another validated tool that uses a gait and balance measurement. If the two tools gave similar results, this, too, supports the tool's validity.
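A criterion-validity comparison like the one just described can be sketched as simple agreement between classifications. The patient classifications below are invented for illustration: the same eight patients are rated high or low risk by the new tool and by an established gait and balance tool.

```python
# Hypothetical sketch of comparing a new tool against an established one.
new_tool    = ["high", "low", "high", "low", "low",  "high", "low", "low"]
established = ["high", "low", "high", "low", "high", "high", "low", "low"]

agree = sum(a == b for a, b in zip(new_tool, established)) / len(new_tool)
print(f"agreement with established tool: {agree:.0%}")
```

Here the tools agree on 7 of 8 patients (88%); high agreement with an already-validated instrument supports criterion validity.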

Predictive validity

The predictive validity of these tools is reported most commonly in terms of their sensitivity and specificity. Sensitivity measures the proportion of actual positives that are correctly identified as having the issue or condition being measured, in our case, patient falls. Specificity measures the proportion of actual negatives that are correctly identified, in our case, the percentage of patients correctly identified as being at low risk for falls.

Sensitivity refers to the number of patients with high-risk scores who fell divided by the total number of patients who fell. If, for example, during the development of the tool 100 patients who fell had high-risk scores and a total of 120 patients fell, the sensitivity would be 83%. The higher the sensitivity, the better the predictive value of the tool.

Specificity refers to the number of patients with low fall risk scores who did not fall divided by the total number of patients who did not fall. If, for example, 500 patients who did not fall had low fall risk scores and 650 patients did not fall, the specificity would be 77%. Again, the higher the specificity, the better the predictive value of the tool.
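The two worked examples above can be computed directly. The function names are illustrative; the numbers are the ones from the text.

```python
# Sensitivity and specificity as described in the text.

def sensitivity(high_risk_fallers, total_fallers):
    """Proportion of patients who fell that the tool flagged as high risk."""
    return high_risk_fallers / total_fallers

def specificity(low_risk_nonfallers, total_nonfallers):
    """Proportion of patients who did not fall that the tool scored as low risk."""
    return low_risk_nonfallers / total_nonfallers

print(f"sensitivity = {sensitivity(100, 120):.0%}")   # 100 of 120 fallers → 83%
print(f"specificity = {specificity(500, 650):.0%}")   # 500 of 650 non-fallers → 77%
```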

Fall assessment tools that have low sensitivity and low specificity will indicate fall risk inaccurately. When a fall assessment tool has low sensitivity, patients at risk for falls may not be allocated appropriate resources to prevent falls. A fall assessment tool with low specificity may indicate a patient is at risk for falls when they are not, so resources are allocated unnecessarily (Matarese, 2015).


Instant Feedback:

The sensitivity of a fall assessment tool refers to the tool's ability to predict a patient's risk of falling.

True
False


References

Matarese, M., Ivziku, D., Bartolozzi, F., et al. (2015). Systematic review of fall risk screening tools for older patients in acute hospitals. J. Adv. Nurs. 71(6), 1198–1209.

Park, S.H. (2018). Tools for assessing fall risk in the elderly: a systematic review and meta-analysis. Aging Clin Exp Res. 30(1), 1-16.


© RnCeus.com