Fall risk assessment tools
Fall assessment tools are used to determine a patient’s risk for falls on admission to a unit. Many fall assessment tools have been tested, along with tools developed by health care organizations or for specific units. The tools are tested for their ease of use and their ability to accurately predict which patients are most likely to fall. The predictive validity of these tools is most commonly reported in terms of sensitivity and specificity. Sensitivity measures the proportion of actual positives that are correctly identified as having the issue or condition being measured, in our case patient falls. Specificity measures the proportion of negatives that are correctly identified, in our case the proportion of patients who do not fall who are correctly scored as low risk.
Sensitivity is the number of patients with high risk scores who fell divided by the total number of patients who fell. If, for example, during the development of a tool 100 of the patients who fell had high risk scores and a total of 120 patients fell, the sensitivity would be 100/120, or 83%. The higher the sensitivity, the better the predictive value of the tool.
Specificity relates the patients at low risk for falls to the patients who do not fall: the number of patients with low fall risk scores who did not fall is divided by the total number of patients who did not fall. If, for example, 500 of the patients who did not fall had low fall risk scores and 650 patients did not fall, the specificity would be 500/650, or 77%. Again, the higher the specificity, the better the predictive value of the tool.
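The two calculations above can be sketched as simple ratios. This is a minimal illustration using the worked numbers from the text; the function and variable names are ours, not part of any published tool.

```python
def sensitivity(high_risk_who_fell, total_who_fell):
    """Proportion of patients who fell that the tool had flagged as high risk."""
    return high_risk_who_fell / total_who_fell

def specificity(low_risk_who_did_not_fall, total_who_did_not_fall):
    """Proportion of patients who did not fall that the tool had scored as low risk."""
    return low_risk_who_did_not_fall / total_who_did_not_fall

# Worked examples from the text:
# 100 of the 120 patients who fell had high risk scores.
print(round(sensitivity(100, 120) * 100))   # 83
# 500 of the 650 patients who did not fall had low fall risk scores.
print(round(specificity(500, 650) * 100))   # 77
```

A tool with both values near 100% would rarely miss an at-risk patient and rarely flag a patient unnecessarily; in practice there is usually a trade-off between the two.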
Fall assessment tools that have low sensitivity and low specificity will inaccurately indicate fall risk. When a tool has low sensitivity, patients who are at risk for falls may not be allocated the resources needed to prevent falls. When a tool has low specificity, it may indicate that patients are at risk for falls when in fact they are not, so resources are allocated unnecessarily.
Another important part of fall assessment tool development is testing for inter-rater reliability, which measures how closely the results of two people using the tool on the same patient agree. Good inter-rater reliability indicates that the items are clear and easily interpreted. Inter-rater reliability should be 88% or higher to be considered acceptable.
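The simplest way to express inter-rater reliability is percent agreement: the share of patients on whom two raters reach the same rating. This is a minimal sketch with hypothetical ratings; published studies often also report chance-corrected statistics such as Cohen's kappa.

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of patients on whom two raters' risk ratings match."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical ratings of ten patients by two nurses ("H" = high risk, "L" = low risk).
nurse_1 = ["H", "H", "L", "L", "H", "L", "L", "H", "L", "L"]
nurse_2 = ["H", "H", "L", "L", "H", "L", "H", "H", "L", "L"]

agreement = percent_agreement(nurse_1, nurse_2)
print(f"{agreement:.0%}")   # 90% — above the 88% threshold cited in the text
```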