Georgia Leith, Impact Analyst at Crisis, shares her experience of using Success Case Methodology to fill gaps in knowledge in Crisis' recently published Housing Support Evaluation. One of the more mind-bending frustrations of charity evaluation work is trying to find a way to identify ‘what works’ without the appropriate data.
Evaluators dream of randomised controlled trials, with every possible datum collected at every possible stage. In the real world, however, this is rarely possible. We don’t want to ask colleagues to offer different support to different people, and we don’t want to withhold support from clients unlucky enough to end up in the ‘control’ condition. In some service delivery organisations, there is no fixed ‘journey’ through the service, so enforcing particular activities or interventions for the purpose of evaluation would go against the ethos of the very service we want to understand.
At Crisis, we wanted to evaluate one aspect of our service for our clients (‘members’): the housing support offer. This offer is primarily coaching support, with Crisis Coaches working one-to-one with members to help them avoid losing their home, to access accommodation, or to keep a new home they’ve just moved into after being homeless. Our case management database tells us when a coaching session happens and how long it lasts, and holds some case notes on its content. Separately, it tells us whether (and when) a member reaches a ‘successful’ outcome in terms of homelessness being prevented or ended. What is missing, and what we’re left wondering about, is the fine-grained detail of each session: what skills and knowledge the Coach was using and, most importantly, what specific interventions made the difference in supporting the member to reach this successful outcome.
To plug this gap in our knowledge, we decided to apply a Success Case Methodology to our evaluation, our Evaluation team’s first attempt at the approach.
What is Success Case Methodology?
Rather than randomly splitting clients into two (or more) groups, each receiving different interventions, and then comparing outcomes between the groups, the Success Case Method works backwards. We already know who had positive outcomes, because these are recorded in our database; we therefore have a pre-existing experimental group. They are not a ‘random’ group, because they are the successful cases – identifying what’s not random about them (similarities in demographics, situations, and interventions by Crisis and others) is key to uncovering what works.
How did we apply the Success Case Methodology to our evaluation?
We trained our Coaches to submit extra data through an online survey over a period of three months. When a Coach recorded in Crisis’ case management system that a member on their caseload had reached the housing outcome they were working towards, they then submitted details about the case into the survey. We asked for the following (a sketch of the resulting record structure appears after this list):
Demographics of the member when the coaching started, along with their living situation and details about their case.
Ten open-text fields for describing discrete actions and interventions by the Coach that contributed towards this housing ‘success’. Coaches were asked to describe what the action was, how they did it, and how it made a difference in the member’s journey to their housing ‘success’.
Ten further open-text fields, as above, for actions by the member (to account for the self-determination and initiative of our members).
Ten further fields for actions by external organisations (so that we don’t end up attributing all success to Crisis, and to better understand how partnerships work).
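For illustration, here is a minimal sketch of how one of these survey responses could be represented as a data structure. The class and field names are invented for this post, not the actual survey schema:

```python
from dataclasses import dataclass, field

# Hypothetical structure for one 'success case' survey response.
# Names are illustrative; the real survey schema may differ.
@dataclass
class ActionEntry:
    what: str        # what the action was
    how: str         # how it was done
    difference: str  # how it contributed to the housing 'success'

@dataclass
class SuccessCase:
    case_id: str
    demographics: dict     # member's demographics when coaching started
    living_situation: str  # living situation and case details at the start
    coach_actions: list[ActionEntry] = field(default_factory=list)     # up to ten
    member_actions: list[ActionEntry] = field(default_factory=list)    # up to ten
    external_actions: list[ActionEntry] = field(default_factory=list)  # up to ten
```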
Getting this programme of data collection to this point involved a lot of planning, redrafting, testing, and training. We needed to ensure a shared language and plenty of buy-in from Coaches.
The evaluation team cleaned this data for analysis, removing actions from the survey that didn’t clearly demonstrate a direct effect on the successful housing outcome. We then developed a coding scheme of intervention types, with the categories emerging from the action descriptions themselves.
Coding the actions meant we could then identify the most frequent types of action: those that featured in the majority of successful cases.
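As a rough illustration of that counting step, here is a minimal sketch in Python, assuming each case’s actions have already been coded into intervention types. The type labels and case data below are invented, not Crisis’ actual coding scheme:

```python
from collections import Counter

# Each case maps to the set of intervention types coded in it.
# Labels and data are illustrative only.
coded_cases = {
    "case_01": {"advocacy_with_landlord", "benefits_advice"},
    "case_02": {"benefits_advice", "viewings_support"},
    "case_03": {"benefits_advice", "advocacy_with_landlord"},
}

# Count the number of cases each intervention type appears in
# (presence per case, not total mentions across fields).
case_counts = Counter()
for types in coded_cases.values():
    case_counts.update(types)

majority = len(coded_cases) / 2
for intervention, n in case_counts.most_common():
    flag = " (majority of cases)" if n > majority else ""
    print(f"{intervention}: {n}/{len(coded_cases)} cases{flag}")
```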
Analysis of this data provided us with a wealth of extra information. We were able to look at:
Which demographics were over-represented among ‘successful’ cases, compared with the average caseload of these Coaches, taken from the case management system (a sketch of this comparison follows the list)
Whether ‘success’ is more likely for cases aiming for particular housing outcomes than other outcomes that Crisis records
Which of the Coach interventions were more frequently cited in successful cases and which were less frequently mentioned
What work done by the members themselves directly contributed to their reaching a housing ‘success’
Which interventions by other organisations (with different specialisms) were also part of reaching a success (helping identify the boundaries of the Crisis service)
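To make the first of these comparisons concrete, here is a minimal sketch of how a demographic group’s share of success cases could be set against its share of the wider caseload. The group names and counts are placeholders, not Crisis data:

```python
# Hypothetical counts; real figures would come from the survey
# and from the case management system respectively.
success_cases = {"group_a": 40, "group_b": 25}       # among the 65 success cases
overall_caseload = {"group_a": 120, "group_b": 180}  # the Coaches' full caseload

n_success = sum(success_cases.values())
n_caseload = sum(overall_caseload.values())

# A group is over-represented if its share of success cases
# exceeds its share of the overall caseload.
for group in success_cases:
    p_success = success_cases[group] / n_success
    p_caseload = overall_caseload[group] / n_caseload
    print(f"{group}: {p_success:.0%} of success cases vs "
          f"{p_caseload:.0%} of the caseload")
```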
The findings from this analysis were triangulated using focus groups with members engaging with Crisis’ housing support offer, and interviews with partners and local experts. This additional data also bolstered our evaluation by providing insights into how the more frequently recorded interventions ‘worked’, showing us the mechanisms by which these interventions lead to successful housing outcomes for members.
What this method enabled us to do
Crisis’ housing support offer is broad, varied and highly responsive to each person using the service. Using the Success Case Methodology, we were able to shed light on which interventions appear to most often make the difference for our members.
The impact for Crisis is substantial: insights from this evaluation have helped the Client Services directorate reflect on their strengths, and our recommendations have initiated discussions on how they might reallocate their resources (time, finances) and adapt their models of delivery, so that the interventions that make the most difference for members are maximised.
We have also produced recommendations for the housing and homelessness sector on ‘what works’ in supporting people to access and keep housing.
Not all plain sailing
The Success Case Method wasn’t a cure-all for our evaluation woes. There were flaws that undermined the findings, and lessons we learned for using the method in the future. Below is a summary of that learning:
Requiring extra data: We were asking Coaches to fill in a survey on top of all the data they have to input daily into our case management system, providing extra information in great detail. This is a big ask for busy caseworkers, and requires cross-organisational buy-in. It has initiated follow-up conversations about how our case management system could build a light-touch version of this activity into business-as-usual data collection.
Relying on memory, and on knowledge and awareness of the member’s case: Our Coaches were asked to describe all the actions and interventions they could recall, along with the details. This means we inevitably missed parts of the case that were not salient to them at the time of completing the survey. Coaches described the work they did, and the work they knew about that was done by our members or other organisations; however, it is likely that other work went on that they did not know about – by the member, by other staff, by other organisations.
Not possible to say ‘what doesn’t work’: The Success Case Methodology identifies ‘successful cases’ and the work in those cases that makes a difference. ‘Unsuccessful cases’, however, are not explored, so it is impossible to say what doesn’t work. The housing support offer, and every element within it, is optional for members; we would therefore not be able to ascertain whether a ‘non-success’ was due to not accessing particular interventions (or accessing them through other, non-housing-related activities within Crisis services), because of other factors we weren’t capturing, or simply because the member hadn’t yet reached the housing success they were aiming for.
Resource-intensive: This evaluation took over a year from initiation to publication, with method development, coding and analysis covering a substantial portion of that time. Running an evaluation of this magnitude, even with the benefits of a designated project team, a well-developed case management system and general buy-in from staff across the organisation, was still highly time- and resource-intensive. Other organisations could scale this down by requiring fewer ‘cases’ to analyse (we had 65) and forgoing the extra methods (focus groups and interviews), stripping it back to just the Success Case Methodology.
Despite all of these hurdles, this rather non-traditional methodology provided us with a whole new perspective on a particular part of our offer, and actionable, impactful recommendations for Crisis and for other organisations. And isn’t that every evaluator’s dream?
You can read more about the Crisis Housing Support Evaluation on their website.
We love to hear from our members about the work they are doing. Got something to share? Why not get in touch about writing your own blog?