2007/08/08 13:45 Gerald Midgley, "Towards A New Framework for Evaluating Systemic and Participative Methods", ISSS Tokyo 2007

This digest was created in real-time during the meeting, based on the speaker's presentation(s) and comments from the audience. The content should not be viewed as an official transcript of the meeting, but only as an interpretation by a single individual. Lapses, grammatical errors, and typing mistakes may not have been corrected. Questions about content should be directed to the originator. The digest has been made available for purposes of scholarship, posted on the ISSS web site by David Ing.

Gerald Midgley, Senior Science Leader, Institute of Environmental Science and Research (ESR), New Zealand

Gerald Midgley

Acknowledgements

Have been developing an evaluation framework in a project:  Sustainable Development: The Human Development

  • Resource use:  water management, human waste
  • Participative systems methods
  • Will broaden later

Why evaluate systems approaches?

  • Do they add value?
  • Paradigm conflict: quantitative versus action research in local context
  • Propose a new evaluation framework
  • Have developed a questionnaire
  • Limitations of framework and questionnaire
  • Invitation to collaborate in a new international research program

Why evaluate methods?

  • Reflexive practice, need to learn, and evaluations can help
  • Renewed interest in systems thinking by policy makers; more opportunities than can be taken up
  • Help decision-makers understand which methods are systemic

Evidence base:

  • Reviewed the evidence base on systems and participative methods
  • The vast majority just report practitioners' reflections
  • Can be unreliable
  • Worked with Skinner at Hull, reviewing the work of an action researcher, asking others who had participated whether they thought he had done a good job
    • No relationship between what he thought he had done and what others said he did
    • Practitioner reflections are unreliable
  • Others develop questionnaires, but questionnaire design has a problem of paradigm blindness: the designer's paradigm tells them what a successful intervention might look like, so they don't see other kinds of success
  • Only a small minority triangulate

Another obstacle, besides the quality of the evidence base, is paradigm conflict

  • Roe:  advocates of universal approaches (quantitative) versus local approaches (action research)

Universal evaluations assume:

  • Criteria of relevance can be defined
  • Common metrics can be defined
  • Can compare across multiple case studies

Local evaluations:

  • Accounts for emergent issues
  • Quantitative data can be useful, but qualitative data is critical
  • Local context can't be eliminated
  • Universal knowledge about methods is unattainable, but can still learn

Purposes pursued:

  • Universal evaluation compares methods pursuing similar purposes, to determine which is best
  • Local is about learning in a single intervention or a series of interventions

Need a framework that integrates both of these purposes

  • Needs to support reflection on single case studies
  • Yields data useful for both local evaluations and comparisons between methods
    • Sometimes local stakeholders don't want extra questions

Framework:

  • Context
  • Purposes of the people involved
  • Methods
  • What can reasonably be said about the methods, given context and purpose
  • Researcher becomes part of the framework, in context, purpose and methods

Framework can be used flexibly, e.g. in a Ph.D. project, or in a single day workshop

Context:

  • No agreement on what needs to be looked at, since different authors emphasize different aspects of context
  • Step up a level, to ask ...
  • Boundaries and value judgements, processes of marginalization.
  • Stakeholder perspectives
  • Organizational, institutional, socio-economic and ecological systems
  • Feedback processes and networks

Context: Practitioner Identity

  • e.g. intervention with a Māori community, looking at clean drinking water in a community house
  • Firstly, the practitioner is non-Māori
  • Secondly, from a Crown research institute, with its own background
  • It is not only the questions that are challenged, but also the researcher

Purposes:

  • Fit between methods and purposes
  • Look for: articulated purposes, hidden agendas, conflicting purposes, mismatches

Purposes: Practitioner Purposes

  • Good fit?  Check with other people
  • Project to help design services for homeless children; worked with street workers on the project
  • Street workers were suspicious that the researchers were just collecting data; it took over a year for them to realize the interest was in social good

Methods:  Process and Outcome Criteria

  • Process: exploration sufficiently systemic?
  • Did it facilitate effective participation?
  • Outcomes: plans / actions / changes
  • Outcomes, in relation to people's purposes
  • Short-term and long-term outcomes
  • Unanticipated outcomes

Methods: Practitioner's Skills and Preferences

  • e.g. SSM has been interpreted in ways ranging from very flexible/responsive through to linear execution

Methods: Other aspects

  • Theoretical assumptions are built into the method
  • Claus Fass:  cost-benefit analysis in national parks assumed a utilitarian approach, which marginalized environmentalists interested in wilderness for its own sake
  • Cultural norms
  • Importing a method from one culture to another can cause difficulties

An evaluation questionnaire:

  • Captures data on process and short-term outcomes
  • Filled in by participants immediately following the workshop; it must be used right after the event
  • Contains (see the sketch after this list):
    • Usefulness (5 point scale)
    • Systemic and participative methods (15 questions, 5 point scale)
    • Drawbacks and potential negative side effects (13 questions, 5 point scale)
    • Cultural viewpoint, open ended questions
    • Basic demographics
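
As an aside, a minimal sketch in Python of how responses to a questionnaire of this shape could be represented and summarized; the section labels, field names, and summary function below are hypothetical, since the actual instrument's wording was not shown in the talk.

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical section labels; the item counts follow the talk's description.
LIKERT_SECTIONS = {
    "usefulness": 1,                # single 5-point usefulness rating
    "systemic_participative": 15,   # 15 questions, 5-point scale
    "drawbacks": 13,                # 13 questions, 5-point scale
}

@dataclass
class Response:
    """One participant's questionnaire, filled in right after a workshop."""
    likert: dict[str, list[int]]    # section label -> ratings on a 1-5 scale
    cultural_viewpoint: str = ""    # open-ended answer
    demographics: dict[str, str] = field(default_factory=dict)

def section_means(responses: list[Response]) -> dict[str, float]:
    """Mean score per section, usable both for local reflection on one
    case study and for comparing the same sections across case studies."""
    return {
        section: mean(r for resp in responses for r in resp.likert[section])
        for section in LIKERT_SECTIONS
    }
```

Keeping the open-ended and demographic fields alongside the numeric sections preserves the local, qualitative context, while the Likert scores support the cross-case comparisons that the universal purpose calls for.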

Found:

  • The majority of people asked cited only a few criteria, and the same set: criteria that all participative and systemic methods aspire to do well on
  • Would like to set up for complementarity between methods, rather than arguing that one method is better than another

Produced a questionnaire

  • Most questionnaires are tested for validity and reliability; this one was also tested for usability
  • Validity is usually assessed using a second test
  • Reliability is difficult to test, because the researchers can't come back the next day; usability means that people will actually fill it out (a standard reliability check is sketched below)
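
The talk does not say how reliability was ultimately assessed; one standard single-administration check (an assumption here, not the speaker's stated method) is Cronbach's alpha over a block of Likert items. A minimal sketch:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency reliability for one questionnaire section.

    scores: 2-D array, rows = respondents, columns = items on a 1-5 scale.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                           # number of items
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1.0 - item_var / total_var)

# Example: 4 respondents x 3 items (e.g. a slice of the 15-item section).
sample = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]])
print(cronbach_alpha(sample))  # values near 1.0 suggest consistent items
```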

Used on test cases

Strengths:  nuanced, yet parsimonious

Limitation:  practitioner can interpret events defensively

Limitations:

  • Could work against pluralist (mixed-method) practice, as there are fewer directly comparable cases
    • Can still use qualitative comparisons
  • Will be testing for validity and reliability in a future project
  • If new methods emerge, their new attributes won't be measured by the existing instrument
  • Doesn't evaluate non-participative approaches

Invitation for international collaboration