
Friday, 1 July 2016

92. Design experiments for local democracy


The Notwestminster work we have been doing is seriously great and the folks involved have achieved a lot.  There have been two brilliant events, many ideas generated and lots of new connections made.  From the work done so far we have settled on a set of local democracy design challenges to work on - a list of things we want to change.

While this is all good, the next step is to do something practical; to make some stuff.  We had a brilliant Maker Day in February but we need to take it up a notch.

The method I'm suggesting is design experiments.  Here are some notes and first thoughts.

Design Experiments


Recently I have been reading ‘Nudge, Nudge, Think, Think’ by Peter John and others.  A fascinating book - you should read it!  It’s primarily about techniques to encourage behaviour change (e.g. nudge, think) in areas of civic concern such as recycling or volunteering.  It is also about how you test and develop innovations in public service settings.  Specifically it looks at how randomised control trials and design experiments can be used to explore new ways of engaging with citizens.

Stoker and John describe a design experiment like this:
‘…researchers work with practitioners over a period of time to design, implement, evaluate, and redesign an innovative intervention. The group receiving treatment is compared to a comparison group. The aim is to perfect the intervention over several iterative cycles until it is the best it can be.’ [1]
One example in the book is of a video being used to get the voices of the ‘seldom heard’ into the debates at a council's area committee meetings.  The research team asked local practitioners to produce a video that was then shown to different area committees.  Observations at each meeting led to changes to the video and, at the end of the experiment, conclusions were reached about the effectiveness of the approach and how it could be made more effective in future (turns out that the context of the meeting was as important as the video itself).

Along similar lines are some of the experiments being done by Nick Taylor.  See this simple voting machine to encourage community engagement, for example.  Again, a small intervention designed to test the idea that lowering the bar to access improves community consultations (it does).

The design experiment method comes from education where classrooms - a relatively controllable environment - serve as laboratories for experiments.

In the same way, the formal process of local democracy in local councils can provide a great laboratory for democracy experiments.  It is a relatively stable and controllable environment and, where you have multiple meetings of the same type, it can support comparisons between interventions and non-interventions.
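
To make that concrete, here is a minimal sketch in Python of how such a comparison might work. Everything in it is an invented illustration: the outcome measure (public contributions per meeting), the figures, and the choice of a simple permutation test are my assumptions, not part of the method as described in the book.

```python
import random

# Hypothetical outcome measure: public contributions recorded at each meeting.
# All groups and figures below are invented for illustration.
intervention = [9, 12, 7, 11, 10]  # meetings where the intervention was tried
comparison = [5, 6, 8, 4, 7]       # matched meetings run as usual

def mean(values):
    return sum(values) / len(values)

observed_gap = mean(intervention) - mean(comparison)

# Simple permutation test: how often does shuffling the meeting labels
# produce a gap at least as large as the one observed?
pooled = intervention + comparison
n = len(intervention)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed_gap:
        extreme += 1

print(f"Observed gap: {observed_gap:.2f}")
print(f"Approximate p-value: {extreme / trials:.3f}")
```

A design team wouldn't need anything this formal to get going, but even a rough check like this guards against reading too much into small differences between meetings.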

Design experiments of this type can only tell us what works (or doesn't work) in a particular context and it is important to recognise this limitation. What they can provide, however, is the starting point for experiments on a wider scale in different settings that might point to more general conclusions.

Advantages


I think there are some real advantages to using design experiments to take forward the #notwestminster design challenges.

1.  It is a manageable approach  
Of course councils don’t have extra resources to dedicate to this type of work but, given that the experiments would be small and hopefully fit with local work that might have been done anyway, I think design experiments are a reasonable proposition.

2.  It will test our assumptions
In our design challenges we have a number of assumptions about how people will behave if different aspects of local democracy are redesigned (people would be better informed if we did this…, people would get more involved if we did that…).  Wouldn't it be great to have some evidence to back this up?  Design experiments, if done robustly, could provide evidence to support our assumptions (or force us to think again).

3.  We can make the most of our network
One of the things I love about Notwestminster is the way it brings together councillors, practitioners, academics, techies and citizens.  Design experiments are a great way to bring people together in small teams to make something new.  We can also share experiments in progress across the network, getting valuable input as we go along.

4.  It will give us something to share
By reporting experiments and their outcomes we can contribute to a growing body of knowledge about civic innovations that will be of use to practitioners and researchers alike.  What’s not to like about that?

5.  We can make something worthwhile
Last, but not least, wouldn't it be great to actually make something that makes a difference to a local council and its citizens?  Wouldn't it?

Challenges


In developing the method and approach there are also some challenges to be overcome.  Sarah Cotterill and Liz Richardson [3] have highlighted some of these as they relate to working with local government, and several are particularly relevant here:

1.  Measuring the difference
If we are serious about research we need to be clear about what ‘outcome measures’ will tell us what we need to know.  Ideally we will want ‘objective’ measures (e.g. voter turnout) but sometimes we will need to rely on people’s perceptions.  The classic concerns about validity and reliability apply.

2.  Mixed methods
If we want a rich picture about what is happening around a particular experiment we will need to invest in a range of sources of evidence.  What research techniques should be used? Do we need a 'tool kit'?

3.  Organisational commitment
Cotterill and Richardson point to organisational difficulties as a reason why many experiments in local government fail.  How can we ensure that experiments won’t get neglected when other pressures and priorities come into play?

Design experiments also require a different way of working and participants need to be clear about that at the start.  As John et al suggest:
'Design experiments favour small-scale innovation, in a relatively controlled environment, where the dialogue can take place with a small range of policy-makers and workers, all of whom have signed up to a new way of doing business and to intense researcher-practitioner interactions.' [2] 
But what should this commitment look like and how can we make sure that it sticks?

4.  Good governance
Design experiments in councils would be managed by ‘design teams’ involving councillors, practitioners, researchers, practice advisors and others.  Being clear about roles and how decisions are made will be important – but how should this look exactly? How should the different experiments be linked together and managed as a whole (if at all)?

So, some initial thoughts.  Let’s see what can be done with them.


[1] Stoker, G and John, P (2008) Design Experiments: Engaging Policy Makers in the Search for Evidence about What Works, Political Studies, Vol 57 (2).
[2] John, P et al (2013) Nudge, Nudge, Think, Think, Experimenting with Ways to Change Civic Behaviour, Bloomsbury, London.
[3] Cotterill, S and Richardson, L (2010) Expanding the Use of Experiments on Civic Behavior: Experiments with Local Government as a Research Partner, The Annals of the American Academy of Political and Social Science, Vol 628.

Tuesday, 24 May 2016

91. An opportunity assessment for public services


I'm developing something for our Public Services Board (local strategic partnership) that will help them set up working groups to address priorities that they have identified.  I'm sharing this partly in search of feedback and partly as it may have applications elsewhere.

Essentially this will be a template to help them work through proposals and to feel confident that they are doing something that is achievable and that will make a difference.  One challenge will certainly be framing a distinct ‘product’ and avoiding the temptation to set up groups that disproportionately spend time discussing problems rather than making solutions.

Below is my first draft of an Opportunity Assessment – this is an idea that comes from agile product development but seems ripe for adapting into a public service context.  Here is Marty Cagan's original post that I have worked from.

Turns out that Carl Haggerty has been thinking about something similar and I've been able to have a peek at some of that work in drafting the list below.

Finally, the work of Public Services Boards in Wales is geared to promoting sustainable development under the provisions of the Well-Being of Future Generations (Wales) Act 2015.  The final five questions reflect the five 'ways of working' that are identified under the Act.

Any comments / suggestions very welcome.

An Opportunity Assessment for Public Services


The following questions need to be answered before work is initiated.

Product Opportunity Questions

  1. Exactly what problem do we want to solve? (value proposition)
  2. For whom do we solve that problem? (target population)
  3. How important is it for our organisations that we solve this? Is it mandatory? Does it help address service pressures or budget challenges? (organisational imperative) 
  4. What effective solutions already exist? Are there solutions that can be borrowed from elsewhere or local solutions that can be scaled up? (what works)
  5. Why are we the right group to do this? (our differentiator)
  6. Why now? (urgency)
  7. How will we get this to happen? (realisation strategy)
  8. How will we measure success? (metrics)
  9. What factors are critical to success? (solution requirements)


Sustainable Development / Ways of Working Questions

  1. How will long term needs be safeguarded? (Long term)
  2. How will the prevention agenda be supported? (Prevention)
  3. How will the national well-being goals be addressed? (Integration)
  4. How will collaboration be enhanced? (Collaboration)
  5. How will affected citizens be involved? (Involvement)
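
For anyone who wants to pilot the template, here is a rough sketch of how it could be captured as a structured checklist in Python. The field names are just my shorthand for the questions above, and the 'no blanks before work starts' rule reflects the statement at the top of the assessment; none of this is part of any agreed format.

```python
from dataclasses import dataclass, fields

@dataclass
class OpportunityAssessment:
    # Product opportunity questions
    value_proposition: str = ""          # 1. Exactly what problem do we want to solve?
    target_population: str = ""          # 2. For whom do we solve that problem?
    organisational_imperative: str = ""  # 3. How important is it for our organisations?
    what_works: str = ""                 # 4. What effective solutions already exist?
    our_differentiator: str = ""         # 5. Why are we the right group to do this?
    urgency: str = ""                    # 6. Why now?
    realisation_strategy: str = ""       # 7. How will we get this to happen?
    metrics: str = ""                    # 8. How will we measure success?
    solution_requirements: str = ""      # 9. What factors are critical to success?
    # Sustainable development / ways of working questions
    long_term: str = ""                  # How will long term needs be safeguarded?
    prevention: str = ""                 # How will the prevention agenda be supported?
    integration: str = ""                # How will the national well-being goals be addressed?
    collaboration: str = ""              # How will collaboration be enhanced?
    involvement: str = ""                # How will affected citizens be involved?

    def unanswered(self):
        """Return the questions still blank - work should not start until this is empty."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

draft = OpportunityAssessment(urgency="New statutory duty from April")  # invented example
print(draft.unanswered())
```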


Thursday, 12 May 2016

90. Use zero-to-ten scale questions to assess wellbeing



Scaling questions ask people to assess how close they are to a self-identified preferred future using a zero-to-ten scale.  The technique is used in solution focused brief therapy but I am suggesting that it can be adapted for the process of producing a well-being assessment.  The purpose of this post is to set out how.

Well-being Assessments in Wales


One of the things that local partnerships have to do to comply with the Well-being of Future Generations (Wales) Act is produce a Well-being Assessment.  Guidance here if you like the detail.

It is different from the needs assessments that have been required in the past in a couple of important respects.

First it is based on an asset rather than a deficit model ('what’s good and how can we have more of it?' rather than 'what’s bad and how can we fix it?').

Second there is an expectation that the assessment will be linked logically to an analysis of the likely response (see my post on Driver Diagrams for an example of what this might look like).

These two points, plus the emphasis on achieving a ‘preferred future’, suggest to me a big overlap with solution focused approaches (see my previous post about this), of which scale questions are an established part.

Scale Questions 


Scale questions are neither a research method nor a scientific tool.  What they are is a way of framing the conversation about a particular outcome and, most importantly, a way of making the link between assessment and action.

Within the therapeutic context, Ratner et al describe it like this:

A solution focused scale is a way of enabling a client to focus on the degree of progress towards their preferred future; it has nothing to do with assessing the extent of their problem. After a client has explored how they have got to where they are on the scale, they can be invited to consider what will tell them (and others) that they have moved to a point further on.  It is important to remember that the scale is the client’s subjective view of the situation.  It is not a scientific assessment! [1]

So, answering a scale question is a matter of judgement and perception, not mathematical certainty.  This is of course true of any assessment, particularly when a number of different sources of evidence are involved. There is no way, for example, of drawing together quantitative and qualitative research other than by an informed judgement.

In the context of a well-being assessment the appropriate point on the scale can be debated with a view to reaching a consensus.  This is no different from the assessment process that an interview panel goes through to agree a mark for a candidate's answer, or that a tender panel goes through to assess a presentation from a prospective contractor.

The second point that can be drawn from the quote above is that the scale question technique calls for a focus on what positive things have got us to the score that we have agreed.  So, for example, we might take the outcome ‘Children have a good start in life’ and agree that it is a ‘5’ for the local area.  Considering what makes it a five may draw attention to excellent play schemes or Flying Start provision – assets to be developed rather than problems to be fixed.

The next step then is to ask: ‘Ok, if we are a ‘5’ now, what would get us to a ‘6’?’

Hence the technique moves quickly from ‘situation’ to ‘response’.

Applying the Technique to Well-being Assessments


So, what would a well-being assessment based on scale questions look like?

The first step is to be clear about the preferred future that is being assessed.  This is likely to mean a set of outcomes, perhaps broken down into sub-outcomes, that describe the future conditions you want to see.  These are set locally; as with the therapeutic context, the fact that they are 'self-determined' is really important.

The Well-being Assessment will then be structured around those outcomes, providing a zero-to-ten score for each based on the available quantitative and qualitative evidence.

Here is my suggestion of what each section might look like.

  1. The evidence will be summarised for each outcome, starting with "On a scale of zero to ten, where zero represents the worst things can be and ten the best they can be, we gave this an 'x'"
  2. Next, under the heading “Why we gave it an ‘x’”, for each outcome the assessment will discuss “What makes it an ‘x’?”  This will highlight the assets that contribute to the score, whether physical, cultural, services, initiatives or otherwise.
  3. The next question is ‘How would we know if we had got one step higher?’  Again, this is a matter of judgement to be negotiated collectively.  The answer might point to the evidence that would be required to push the score one point up the scale.
  4. Finally the assessment should propose, for each outcome, “What would make it an ‘x+1’?”  This might point to an extension of existing schemes or approaches that have worked elsewhere.
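
To show how the four steps hang together, here is a rough sketch in Python of one outcome section as a structure. The field names are my own invention; the outcome, the score of '5' and the assets are the examples used earlier in this post.

```python
from dataclasses import dataclass, field

@dataclass
class OutcomeSection:
    outcome: str   # the locally agreed outcome statement
    score: int     # the agreed zero-to-ten judgement (a negotiated view, not a measurement)
    what_makes_it_x: list = field(default_factory=list)       # step 2: assets behind the score
    next_step_evidence: list = field(default_factory=list)    # step 3: how we'd know we're at x+1
    what_would_make_it_x_plus_1: list = field(default_factory=list)  # step 4: proposed responses

    def summary(self):
        return (f"On a scale of zero to ten we gave '{self.outcome}' a {self.score}; "
                f"the question now is what would make it a {self.score + 1}.")

good_start = OutcomeSection(
    outcome="Children have a good start in life",
    score=5,
    what_makes_it_x=["excellent play schemes", "Flying Start provision"],
    what_would_make_it_x_plus_1=["extend play schemes to more areas"],  # invented suggestion
)
print(good_start.summary())
```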


Benefits of the Technique


Approaching the assessment in this way has a number of advantages:

  • As a solution focused approach it places attention on assets rather than problems
  • The scale questions allow us to be both ambitious about the future AND realistic – while the statement that ‘children have a good start in life’ may be desirable even if ultimately unachievable in full, the focus on ‘+1’ ensures that the conversation remains manageable and practical.
  • Scale questions move us quickly from an assessment of the current situation to what the response might be
  • Partnerships will be required to identify a small number of ‘well-being objectives’ and to address them within Well-being Plans.  Using scale questions allows progress against different outcomes to be compared (remembering that the scores represent judgements not facts).
  • The scale question technique can be more widely used for research and engagement outside of the well-being assessment process - a new tool in the engagement toolbox.
  • It is easy to explain and to get people engaged – it lends itself easily to public involvement – a public survey can be fed in, or conducted in parallel, for example.  Public results can be reported separately and compared to the partnership's scores - big differences will provide an interesting topic for debate (see the sketch after this list)!
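
Here is that comparison sketch. The partnership score, the survey responses and the two-point threshold for flagging a 'big difference' are all invented for illustration.

```python
partnership_score = 5                      # the panel's agreed zero-to-ten judgement
public_responses = [2, 3, 3, 4, 2, 3, 3]   # zero-to-ten answers from a public survey

public_average = sum(public_responses) / len(public_responses)
gap = partnership_score - public_average

print(f"Public average: {public_average:.1f}; partnership score: {partnership_score}")
if abs(gap) >= 2:
    print("Big difference - an interesting topic for debate!")
```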



[1] Ratner, H, George, E and Iveson, C (2012) Solution Focused Brief Therapy: 100 Key Points and Techniques, p. 115.

Monday, 28 March 2016

89. Twitter tips for scrutiny teams




Last week we had a good old-fashioned tweetchat about how scrutiny teams can make best use of twitter.  We used the #scrutinytweets hashtag if you want to check it out.

It was initially planned as an exchange between the Swansea and Birmingham teams but others joined in - all in all it was a lively and thoroughly enjoyable hour, so thanks to everyone who chipped in - hope you found it useful.


By the way, if you are thinking about using social media for scrutiny you might also like this piece I wrote after the Centre for Public Scrutiny ScrutinyCamp event in 2014.

Anyhow, here are the four tips from the tweetchat that I summarised at the time:

1. Twitter is one small part of a wider approach to public engagement for scrutiny 


Yes, use twitter but it is not a panacea and not everyone uses it. However, it is relatively low cost to use so look for the things it does do well.

2. Use team accounts but look at using individual accounts as well


Team accounts should be the default. They are good for presenting the formal face of scrutiny but there may be times when a more personal touch is required. Use the initials of the individual tweeting in tweets or cultivate 'professional' accounts for team members. As scrutiny officers we know all about being careful with what we put into the public domain - so it shouldn't be a problem, right?

(There does seem to have been a decline in team accounts recently, by the way; it may be that they are seen as an additional cost, which would be a shame if true. Even if a team account is not an option I still think that individual scrutiny officers can use twitter to add value to their practice without it being too much of a song and dance...)


3. Tagging really helps to get stuff out there and is a 'tap on the shoulder' for people who might engage / share


While there is nothing wrong with just 'putting stuff out there', including one or more twitter handles in a tweet makes sure that it reaches the people that it should.  People follow so many accounts they can easily blink and miss something.


Did you mention an organisation in a meeting? Tag them. Did you want an organisation to complete an online survey? Tag them.  Did you want councillors to share their scrutiny work with residents? Tag them. Did you want an evidence giver to know that the report they contributed to has been published? Tag them.


The people you tag might also be more inclined to share stuff if it is directed at them.


By the way, hat tip to the Essex Democratic Services for the 'tap on the shoulder' metaphor.



4.  Scrutiny teams should make more use of twitter to share practice and explore possible joint projects


Finally we agreed that we all should be much better at using twitter to work together as scrutiny teams.  

Definitely!


Let's do it.