
The struggle of bringing research into climate change policy

I read a very interesting article in the Guardian Weekly about the struggle of science research in the United States following the election of Donald Trump as president (The climate change battle dividing Trump’s America). Funding for climate change research is being cut. A climate change sceptic has been appointed as head of the Environmental Protection Agency. A couple of things caught my attention in this article:

  1. Yes, informing policy decisions with scientific knowledge and research is difficult
  2. Science and evidence for policy is under attack from the conservative right, but also from a postmodern strand of left-wing discourse which views science as a social construct
  3. Politicians in the US, UK and Europe have become too technocratic, unable to view issues on a personal, emotive basis
  4. Climate change scientists in the US are considering running for Congress, thus becoming politicians in order to bring more evidence and science into policy

These are difficult times for making the case that research, knowledge and science matter for informing and improving policy decisions, particularly in the West. Luckily, there are countries that are investing in evidence and seeking ways to bring more evidence into policy, as Louise Ball of the Overseas Development Institute has highlighted in a recent blog: Developing and emerging countries buck the ‘post-truth’ trend.


Doing Development Differently means Doing Monitoring, Evaluation & Learning Differently too

Recently, we attended a two-day workshop on ‘Implementing the New Development Agenda: Doing Development Differently (DDD), Thinking and Working Politically (TWP) and Problem Driven Iterative Adaptation (PDIA).’ The event was co-organised by the Knowledge Sector Initiative, the KOMPAK programme, and The World Bank in Indonesia and attended by practitioners, researchers, government and other partners – and it was great to see how the debate is becoming more mainstreamed and nuanced. With the benefits of adaptive programming firmly accepted (at least by those in attendance), workshop sessions provided space to explore more in-depth questions, such as: how can you monitor, evaluate and learn (MEL) from adaptive programmes?

MEL is particularly important for the quick feedback loops needed to inform adaptive programming. The session discussion was really interesting, and it looks like there may be an emerging DDD MEL community of practice. Broadly speaking the discussion identified two main challenges and drew out experiences that show how these can be overcome:

1. The need for an integrated MEL team that encourages a culture of learning

Doing Development Differently requires programme teams to contribute actively and explicitly to programme learning, for example by providing feedback and reflection in meetings or after-action reviews. This should help to inform programme decision-making. However, programme teams often do not have the necessary skills to do this – or they do not see it as part of their day-to-day work, and don’t make time for it.

For too many development programmes, the monitoring and evaluation (and only sometimes learning!) unit is staffed with M&E experts who specialize in meeting donor accountability reporting requirements. They tend to be separate from the work of the programme teams, and from day-to-day interaction with partners. They develop their own plans for monitoring activities and rarely involve the programme team and partners.  What’s more, the M&E manager is often not part of the programme’s senior management team.

Some solutions we identified:

  • Programme team leaders need to invest in developing different learning capacities within teams.

Programme officers’ job descriptions should include monitoring and learning tasks, as well as soft skills such as facilitation to enable better participation in programme design and activities.

  • The programme’s senior management team should have strategies in place to develop a learning culture within their teams.

Learning cannot be imposed on the staff. It has to emerge in a participatory way. Examples include rewarding writing and publications, creating space for open discussion about what works and what does not, and generating evidence that being involved in monitoring and learning actually helps to improve your day-to-day work with partners.

  • MEL teams and units should be set up only if they are really needed and add value.

If the MEL work done by the programme teams is sufficient, then there is no need for a dedicated MEL team. Where they are necessary (for instance in larger programmes), greater collaboration between the MEL and programme teams can be fostered by creating space for reflection and sharing that has a clear purpose. For example, the purpose may be to inform programme staff about decisions, validate evidence of progress, or reinforce the programme goals and approach. Another way would be to involve MEL staff in programme activities, such as designing prototypes.

  • All this requires a flexible environment that provides a real delegation of authority and decision-making within teams.

2. The need for MEL frameworks that are fit for purpose (to enable learning and adaptation, rather than only for accountability and milestone reporting)

MEL frameworks and approaches are often produced too late in the programme cycle, and are frequently overcomplicated. There are two reasons for this. First, there is sometimes a misunderstanding about the expectations of the funder. Second, MEL plans are often a key deliverable that triggers a milestone payment to the contractor, and therefore need a lot of detail.

Another problem is that adaptive programmes – especially in the field of social change and policy innovation – may implement several pilots at the same time. Monitoring and learning from them results in rich case studies and stories of change. The reporting to the funder is often in an aggregated form, which loses important information about patterns and differences. When the information becomes too generalised, it is less useful to inform the funder’s investment decisions.

Some of the solutions identified were:

  • Programme leadership could take the initiative to design reporting processes that start from what is most useful for the programme, and discuss with the donor changes to reporting requirements.

For example, the case could be made for annual progress and learning reports, with very brief highlight updates on a quarterly or six-monthly basis.

  • MEL systems have to be fit for purpose: do not “over-engineer” the MEL framework and approach.

Start small. Be adaptive and test a few simple tools and questions that help you learn. Work with funders to select tools and processes that work for the programme. And invest in team capacities and capabilities that really help to inform an adaptive approach.

  • MEL teams have to be integrated into the work of the programme.

Much of the above discussion has focused on the monitoring aspects of MEL – the ongoing gathering and reporting of data. This leads to the question: is there a role for evaluation (discrete studies) to support adaptive programming? And if so, what might they look like? But that’s a discussion for another blog (including how we define evaluation as compared to monitoring).

This article was originally published on BetterEvaluation. Read the original article.


On the Indonesian Knowledge Sector: two new working papers

At the Knowledge Sector Initiative we have just published two new working papers on various aspects of the Indonesian Knowledge Sector and on evidence-to-policy processes, systems and experiences. I really enjoyed working on these papers.

Is Measuring Policy Influence Like Measuring Thin Air? The Experience of SurveyMETER in Producing Three Episode Studies of Research-based Policy Influence

Arnaldo Pellini, Tanty Nurhayati Djafar and Ni Wayan Suriastini

This working paper reflects on the experience of SurveyMETER, a policy research institute based in Yogyakarta, in testing the use of episode studies to describe a policy influence and policy engagement process that SurveyMETER had been leading. Episode studies are one of the tools that can help policy research organizations and think tanks document the uptake of their research work and/or the degree to which the evidence they have produced has informed policy processes and policy actors. A key lesson from SurveyMETER’s experience is that investing the time and resources to produce episode studies requires intent and commitment from the organisation’s leadership to learn about what works and what does not in its policy engagement and policy influence activities.

Linking Values and Research Evidence for Policy Advocacy: The Journey of the Indonesian Forum for Budget Transparency

Inaya Rakhmani, Arnaldo Pellini and Yenti Nurhidayat

Linking Values and Research Evidence for Policy Advocacy

Indonesia is the largest economy in Southeast Asia but ranks only 88th (out of 167 countries) in Transparency International’s 2015 Corruption Perception Index. Anti-corruption reforms in Indonesia have been slow due to generalised expenditure inefficiencies and the misuse of public funds at both national and local levels. The government is trying to address these issues, but these efforts also require the involvement of non-state actors. Civil society organisations have been involved in this policy issue since 1998. SEKNAS FITRA is one of them. This policy research and advocacy organisation has played a prominent role in pushing for more budget transparency in Indonesia. This working paper describes how SEKNAS FITRA has evolved over the years and how, by linking policy research and advocacy, it has become a key non-state actor in this policy area.

 

 


Cycling to work in Jakarta: can we start to hope?

It takes me about 20 minutes to get to the office, and I saw not one but two other cyclists on their way to work. And guess what? Both were riding a Brompton, as I do.

The first cyclist happened to follow the same route as me. I tailed him all the way to Senopati. He rode a red Brompton, full of accessories: front carrying bag, saddle pouch, retractable mirror fitted on the handlebar. He wore the compulsory helmet and anti-pollution mask.

I saw the second Brompton rider a few minutes later, as I was starting to push my bicycle up the ramp of the Transjakarta footbridge. A light green Brompton, inconspicuous, with only basic accessories.

I do not know about you, but I have the impression, since I came back from annual leave last July, that more people in the streets are taking their bicycles to go to work. Don’t get me wrong. It is not that the streets are now flooded with brightly coloured bicycles. But where I used to spot one cyclist every three weeks or so, I can now see three or four a week. Even two on the same day, as last Wednesday. Is something changing? Is this the beginning of a shift that will bring more people onto their bicycles and fewer cars and motorbikes onto the streets? Can we start to hope?

After all, Jakarta is among the top 10 cities with the worst traffic in the world. Traffic jams cost the city economy an estimated $3 billion annually. One of the most recent estimates, from 1999, put the health costs associated with air pollution in Jakarta at $220 million.

The city administration is doing a lot to address its traffic problem. It has created the Transjakarta bus network system. It is building the first north-south MRT line, with both elevated and underground segments. This is a massive investment that finally took off when Joko Widodo, now President, was elected Governor of the city back in 2012.

These investments are attempts to address the traffic congestion problem, and they could be complemented by smaller investments that enable more environmentally friendly ways to commute to work or to get to the supermarket or the grocery store. These could include bicycle lanes that are separated and protected from cars and motorbikes. Not everyone will be able to commute to work by bicycle. Some, but not all, of my colleagues, for example, live too far away to do that. Those who live close enough would, I think, commute by bicycle if the right infrastructure made it safe.

I see more cyclists in the streets lately. Maybe it is just my imagination or chance, but hopefully we can really start to hope that cycling to work becomes more accepted and practiced.


Learning about learning in an adaptive programme

I am re-posting here the blog published last week by BetterEvaluation, where Fred Carden and I discuss learning in an adaptive programme. BetterEvaluation has started a conversation to answer questions such as:

  • How relevant are these ideas for our work?
  • How different is learning in an adaptive programme compared to what we already do?
  • What are some challenges in doing evaluation in ways that support adaptive management?  How can they be overcome?
  • Are there good examples of evaluation for adaptive management we can learn from? Or guidance?

If you’d like to be involved further in the discussion with BetterEvaluation and help with the development of an Adaptive Management Option page, please register your interest and let them know what examples, advice, resources or questions you’d like to share.

The Australia-Indonesia Partnership for Pro-Poor Policy: The Knowledge Sector Initiative (KSI) is a joint programme between the governments of Indonesia and Australia that seeks to improve the lives of the Indonesian people through better quality evidence-informed policy making. The programme is working with Bappenas (the Indonesian Ministry of National Development Planning) to assist research institutions to improve the quality and relevance of their research; to improve the communication of research results to inform public debate and policy making processes; and to identify and mitigate systemic barriers that limit interaction between knowledge production, intermediation, demand and use. The first phase of the programme started in May 2013 and will end in June 2017.

KSI is built on the hypothesis that greater use of research evidence improves public policy. It does not have ready-made solutions to the problems in the evidence-based policy making space in Indonesia; it is a programme that has to find solutions that work through trial, error and learning. In the following conversation between Fred Carden and myself, we discuss what learning means in an adaptive programme such as KSI.

Arnaldo Pellini I have been thinking lately about the meaning of Learning in a programme like KSI, which aims to be problem-driven and adaptive. Recently Duncan Green wrote that ‘adaptive management seems to be where the big aid agencies [and large programmes?] have found a point of entry into the whole ‘Doing Development Differently’ debate.’ There is a lot of talk around Learning in this attempt to find a different way to address ‘wicked hard problems’.

A couple of weeks ago I stumbled upon a quote by Denis Diderot: ‘There are three principal means of acquiring knowledge… observation of nature, reflection, and experimentation. Observation collects facts; reflection combines them; experimentation verifies the result of that combination’. It made me think about the implementation of KSI and the process of transforming raw data from observations into knowledge and learning to be used by others. I also thought about the many meanings and interpretations that are given to Learning by all the individuals and stakeholders who gravitate around the KSI programme. I came to the conclusion that establishing and developing a learning function that satisfies everybody is almost impossible, because the demands, needs and interests are very different. What do you think?

Fred Carden Interesting question. Let’s start from what KSI’s learning approach means. To look at that, I will contrast it with the approach that was (appropriately) used in the design phase of KSI (back in 2010-2013) and which many assumed would continue to be central to how KSI operated. That did not turn out to be the case. KSI was designed largely through the conduct of diagnostics to get a better handle on the issues and problems at play and to help identify some key starting points and key actors in the knowledge sector. It did that well.

If we go back to our dictionaries, diagnosis is about identifying a problem, be it a disease, or an institutional or legal barrier. As such it is positivist; it is about getting clear on what is happening. Diagnosis is not treatment. Diagnosis is not about doing something to address what has been identified or clarified; it is merely about our best guess on what the problem actually is – not to dismiss or downgrade diagnosis, but simply situate what it does and does not do. Doing a diagnosis is useful because as we learn our diagnosis changes. But we don’t learn by doing diagnosis. A diagnostic approach can identify some of the key problems and opportunities through a number of tests or studies. At some point, an agreement is reached that sufficient diagnosis has been carried out to identify the problem(s).

AP Interesting metaphor. So, the diagnosis is not an end in itself. It is a means to an end: to generate and accumulate evidence to better understand and unpack the problems a programme will face. It kick-starts the process of thinking about possible solutions. You are also saying that diagnosis is more than an assessment done at the beginning of a programme; it is iterative.

FC Yes. In the case of the knowledge sector, the diagnostics mapped out a response to the diagnosis with the design of a fifteen-year programme in three phases, that is, KSI. Part of the diagnosis was that the problem in the Indonesian knowledge sector would take a long time to treat because of its multiple manifestations in different sectors and at different levels. The diagnosis did not suggest taking two aspirins and calling again in the morning. It said you would be at this for a long time and suggested there would be many twists and turns along the way. It also suggested that diagnostic studies would continue throughout implementation of the programme. The diagnostics and experience also suggested that there was a need for much stronger ownership of the process in local institutions and organizations. So to move forward, KSI moves beyond diagnostics to the “doing” – and that is where the learning happens.

AP I can see a risk here, especially if researchers are involved in a programme, and that is that diagnostics are never enough. If they are iterative, they almost certainly raise new questions that can be studied further. In a sense, the diagnostic and learning that comes from it may never end.

FC I agree, the risk is there, but remember that learning and better understanding do not come from diagnostic studies. Learning comes from doing:

Doing is the precursor to learning and learning is the precursor to developing a robust vision for the work to be done going forward. (Patrizi & Patton, 2009, IDRC)

We had an excellent diagnosis to start with, but we knew that a diagnosis could only peel back so many layers of the complex problems of the knowledge sector; these are many, such as the poor quality of research, low capability to demand evidence in government organisations, insufficient incentives for academics to publish in international journals, etc. Learning also comes from testing things out, learning from what happens, revising and testing again. If we go back to our dictionary, learning is the activity or process of gaining knowledge and skills by study or practice – put another way, the modification of a behavioral tendency by experience (as opposed to conditioning). As we learn we move into new territory, so some new diagnostics are needed, but always with an eye on how they help us move towards our objective.

AP Problems or solutions? Where to start, and what does it mean for learning? I find these questions interesting on multiple levels: adaptive development, alternative development processes, monitoring and evaluation, knowledge management, etc. I recognize that I am biased and believe that it is important to start from problems first, with an open mind about what the solutions can be. Starting from solutions (i.e. expert solutions) has been one of the main problems that development has had in the last 40+ years, particularly in building state capability. But what you are saying is that it is also possible to start from solutions with an open mind. All of which is also conducive to learning.

FC A learning approach starts from the position that we have some idea of what we think the problem is (diagnosis is problem-driven), and some ideas about what to do about it, some possible solutions. When I say ‘we’, I do not mean one group (such as a programme team) with one view and one set of ideas, but a number of different stakeholders who agree on the general parameters of the problem they want to solve and may want to test different ideas and different approaches. The path forward in a learning approach is to take some of these potential solutions and test them, rather than focus in the first instance on a redefinition of the problem. So I say solution-driven because the whole point is to experiment and find possible solutions. But these need to be based on evidence of what the problem seems to be. Too often research is happy with its diagnosis, but  to me, the issue is how we use that diagnosis for change.

It also means being open to the potential solutions of others and where it makes sense, integrating and adapting others’ ideas and approaches. As we test, we refine the problem and refine the solutions. Through testing and interaction with the problem, elements of it become clearer and different elements take on more importance than others, sometimes because they are more tractable and therefore help create momentum or a vision for the work to be done going forward, sometimes because we did not see them until we started the work. A learning approach is iterative, going back and forth between problem and potential solution. As we test things out we also redefine and refine our understanding of the problem. But the key point is that a learning approach doesn’t stay focused on only the problem, it goes to solutions.

Documenting our learning is the major activity in which KSI is engaged at the moment. The evaluation products such as Stories of Change and their synthesis, the KSI Performance Story, the organizational assessments of policy research institutes, the documentation of practices that have worked elsewhere all contribute to the learning that is informing directions for the next phase of the programme.

AP In a sense it is not so much a dichotomy between problems and solutions, rather the acknowledgement that a learning-based approach is the common denominator between problem analysis and solution testing.

FC A learning-based approach does not negate the need for periodic diagnostics. But the learning really happens when we try something out and see what works, why and for whom. We can use that learning to adjust, carry out new diagnostics or sometimes continue on the same path. The approach intends to be more partner-driven and is focussed on identifying the mechanisms that will strengthen the knowledge sector over the long term. It involves trial and error, which brings us to the point that learning has opportunity costs. We have to recognize these, but I would argue that the resources spent on learning help ensure we waste much less of our precious programme resources. Some of the working papers that KSI has presented are diagnostic in nature, such as the partner review of university barriers led by Yanuar Nugroho. Some working papers present solutions that have been tested and tried out elsewhere and which could generate ideas for possible solutions in the context and with the partners with whom KSI works (e.g., Managing a Government Think Tank: Inside the Black Box; Investing in Evidence: Lessons from the UK Department for Environment, Food and Rural Affairs).

AP To conclude, it is not so much about whether to start from problems or solutions, in terms of acquiring learning that informs plans and actions. Both are good options.

FC Yes, that’s right. We need some thoughtful, locally-driven problem definition, but we also need to make some forays into possible solutions. I would add that what matters is also who speaks of the problems and who is suggesting the possible solutions. The KSI programme tries to give ownership and control of the analysis of problems and the identification of solutions to partners through core grants and other mechanisms.

Complex interventions are, well… complex. There are no easy solutions, and even understanding the problems can be a challenge. As someone once defined complexity: ‘You don’t solve complex problems, but you know you are making progress if you like your new problems better than your old ones.’ The essence of a learning approach is continuous adaptation and evolution of problem and solution.


Here is what we learned about learning about PDIA (Part II)

Written by Arnaldo Pellini, Endah Purnawati, and Siti Ruhanawati

During the last couple of months we have gone through the six modules of the online PDIA course developed by Matt Andrews et al. We want to share here what we have learned about learning about Problem Driven Iterative Adaptation (PDIA). You can find the Part I blog, with some of our early reflections, here.

We work in a relatively large programme which aims, with its partners, at strengthening the capabilities and systems of evidence-based policy making in Indonesia. It is a programme to which a PDIA approach seems particularly suited (see the Knowledge Sector Initiative).

I (i.e. Arnaldo) did the course a year or so ago and found it super interesting. Ana and Endah did not take part at the time but really wanted to discover what PDIA is and how it can help their work.

When I took part in the original course, I saved the reading material and assignments into folders: a folder for each module. I also created YouTube playlists storing the video lectures in the right sequence.

The six modules of the course are:

  • Capability for Policy Implementation
  • Techniques of Successful Failure
  • Building the Capability you Need
  • PDIA to Escape
  • Constructing Problems
  • Deconstructing Problems

We decided to meet every second week. I shared the reading material, assignment and video playlist for the first module, and two weeks later we met to discuss it. For me this was also a good opportunity to go through the material again.

After the second module, Ana and Endah realized that it was quite difficult for them to make time, especially for the readings. Two weeks seemed a good amount of time for one module but, due to the workload in the programme, they found themselves having to rush the reading during the last couple of days before we would meet again. It did not work well.

At Module 3 we decided to try something different.

We watched the module’s videos together. This proved to be the best way to go about learning about PDIA and we continued in this way until Module 6.

Our key takeaways from learning about PDIA are:

  • Watching the videos together, pausing them at particularly interesting points or rewinding them when something was not clear, gave us the opportunity not only to learn about the principles and key concepts underpinning PDIA, but also to discuss right away what those principles and key concepts mean for our programme and our work. Is PDIA feasible in our programme? What are the challenges? Which areas of our programme deal with wicked hard problems? Which ones with logistical problems? Do we come up with three possible solutions for each problem we identify? And so on.
  • The approach emphasized by PDIA, which pushes us to get to the real roots of problems, can help us assess whether we are dealing with real problems or just solutions presented as problems.
  • When we identify a real and concrete problem, we can be confident that we can think through and find some solutions.
  • PDIA requires acceptance and space to be applied in a programme. This requires programmes to have the flexibility to try out solutions and accept that these may not succeed.

We shared these takeaways with the programme at a brown bag lunch at the office. The questions our colleagues asked resonate with the discussions taking place in the PDIA and DDD blogosphere about the next steps for PDIA:

  • When testing a solution, how long do you go before you decide it is not working?
  • How do you reconcile annual plans and budgets, approvals from steering committees, and regular reports to the funder with this way of working? Maybe by developing sufficiently general plans, getting them approved, and allowing for modifications within certain parameters?
  • How do you engage government partners in this way of working?
  • … and more

The learning about PDIA continues here in Jakarta.

 

P.S. In case you wonder what the photo at the beginning of the blog has to do with PDIA, the answer is: not much. I am a bicycle commuter in Jakarta, and the photo is part of my personal campaign for bicycle lanes in the city. Their absence, now that I think about it, can be seen as a wicked hard problem to solve.

Photo credit: Charles Wiriawan, Flickr

Indonesia’s PISA results show need to use education resources more efficiently

Indonesia’s results in the OECD Programme for International Student Assessment (PISA) 2015 report show some improvements in the skills of students. In particular, girls are performing better than boys in all subjects: science, reading and mathematics. They are significantly better in reading.

Out of the 72 countries and economies reviewed every three years, Indonesia ranks 62nd, a slight improvement compared to 2013. Indonesian students ranked second-lowest in the 2013 PISA ranking (71st), worse than their ranking in 2009, when Indonesia ranked 57th.

The improvement of girls’ performance has helped to lift Indonesia’s ranking. But, overall, the performance of Indonesian students (girls and boys) in science, mathematics and reading is one of the lowest among PISA-participating countries with an average ranking of 62 out of 69 countries. (Three of the 72 assessments involved city groupings: Buenos Aires (Argentina), Beijing-Shanghai-Jiangsu-Guangdong (China) and Hong Kong (China).)

The score difference in the test in science, mathematics and reading between the 10% of Indonesian students with the highest scores and the 10% of students with the lowest scores is one of the smallest among PISA-participating countries. Indonesia’s average ranking across the three subjects is 65th out of 69 countries.

Inequality and school performance remain an issue in Indonesia. The percentage of low performers in science among disadvantaged students is among the highest globally. Indonesia ranks ninth among the countries on this measure of inequality.

Indonesian students overall do not seem to value or see opportunities in becoming a scientist. Only a small share of disadvantaged and advantaged students in Indonesia expect to pursue a career in science compared to other countries participating in PISA. Among the few who want to pursue a career in science, Indonesian girls are significantly more likely than boys to want a science-related career.

Assessing students’ ability to apply knowledge

For the PISA report 2015, 540,000 students (representing about 29 million 15-year-olds) in 72 participating countries and economies sat the test.

PISA focuses on core school subjects such as science, reading and mathematics. It does not assess whether 15-year-old students can reproduce knowledge in those subjects. Instead, PISA assesses the extent to which students who are getting near the end of their compulsory education “have acquired key knowledge and skills that are essential for full participation in modern societies”.

The underlying assumption of the PISA testing is that “modern economies reward individuals not for what they know, but for what they can do with what they know”.

The findings are a rigorous and systematic source of evidence that is meant to “provide evidence to policymakers about the knowledge and skills of students in their own countries in comparison with those in other countries … it can help countries to learn from policies and practices applied elsewhere.”

The top five countries in the PISA ranking are Singapore, Japan, Estonia, Chinese Taipei and Finland.

Do students from high-income countries perform better?

The PISA results show that students in high-income countries and economies do not necessarily perform better. Vietnam ranks eighth, higher than Australia and other OECD countries.

While high-income countries and economies (with per capita GDP above US$20,000) have more resources to spend on education, what seems to matter most in terms of quality and equitable education is the management of educational resources and the quality of teaching.

The PISA report states:

“When teachers frequently explain and demonstrate scientific ideas, and discuss students’ questions, students score higher in science and have stronger beliefs in the value of scientific enquiry and are more likely to expect to work in a science-related occupation later on.”

In the case of Indonesia, the resources are there. Education dominates social spending and 20% of the budget has to go to education.

The PISA Report 2015 found that the quality of Indonesia’s schools’ educational resources is one of the highest (fourth out of 69) among PISA-participating countries and economies. Having said that, this does not mean that schools in Indonesia have all that they need. Some regions are still poorly equipped.

What does this mean for Indonesia?

Primary and secondary education is the foundation of the Indonesian knowledge sector. Primary and secondary schools have to equip students with the analytical and critical thinking skills required to become scientists. They also have to inspire students to want to become researchers in any field.

The PISA results show that Indonesia is still struggling. Without skilled students leaving compulsory education, Indonesian universities will not be able to expand and strengthen their research programs and improve their international standing.

The private sector will struggle to compete internationally in knowledge creation, thus making it more difficult for Indonesia to transition to a knowledge-based economy. Fewer scientists and researchers means less knowledge and research is available to inform policy decisions.

The PISA Report 2015 shows only a slight improvement in the skills of Indonesian 15-year-old students. This does not correspond to the amount of resources that the government invests in education.

In August 2015, at the International Conference on Best Development Practices and Policies organised by Indonesia’s Ministry of National Development Planning, Michael Woolcock, lead social development specialist at the World Bank, said in his keynote speech that middle-income countries like Indonesia have solved the “development 1.0 problems”. Schools have been built, key legislation and policies are in place, teachers have been recruited in large numbers and data are being gathered and analysed.

The solution to the problems highlighted in the PISA report 2015 is not more resources. Rather, we need to use existing resources more efficiently.

This is a development 2.0 problem. There are no blueprints or grand solutions to these problems.

More likely, Indonesia needs thousands of small answers informed as much as possible by good-quality research into what the problems are, what works, what does not work (and why).

This article was originally published on The Conversation. Read the original article.

Photo credit: Charles Wiriawan, Flickr



What are we learning when we learn about Problem Driven Iterative Adaptation (PDIA)? (Part I)

Written by Arnaldo Pellini, Endah Purnawati and Siti Ruhanawati  

For the last three weeks I have met with two colleagues, Ana and Endah, to discuss PDIA. We are going through the modules of the PDIA: Building Capability by Delivering Results – Part I online course, which Matt Andrews, Michael Woolcock, Lant Pritchett and Salimah Samji ran in 2015 and 2016.

Every Monday morning we meet to discuss the weekly module. I did the course last year and it is great to go back to it, refresh my memory and, more importantly, hear from Ana and Endah how it all relates to the context in which we work here in Indonesia.

So far we have gone through the following material:

  • Challenges of building state capability
  • The poor results achieved by international programmes during the last 20 years in building state capability, particularly in middle-income countries
  • Isomorphic mimicry, or the establishment of institutions that look good on paper but do not actually function
  • The rush to establish new institutions so that they fit project cycles, which then collapse under the weight of too much too soon (i.e. premature load bearing)

Here are some of the points that have emerged in our weekly discussions so far:

  • Endah says that in our work we often do not look at problems carefully. We do not spend enough time actually exploring and understanding the roots of the problems, and we tend to take shortcuts, suggesting solutions before the problems have been explored well. This makes us less interested in learning for better implementation, because we are then too occupied with implementing solutions.
  • There is a lot of isomorphic mimicry around us. A lot. Policies and regulations are being designed, supported and signed off all the time. Often contradicting or duplicating each other. The example in the course of the driving licence office in Delhi is not an exception, but probably the norm.
  • We found the statistics on the lack of progress in state capability quite astonishing. Pritchett notes that ‘The state capability for service delivery is the aggregation of the capability of the organizations acting at the behest of the state (which themselves could be public or private) to provide the service … A central idea in development was that there could be ‘accelerated modernization’ as the lagging countries caught up on the leaders. Strikingly, growth in state capability in the development era appears to be even slower than the now developed countries.’
  • The Quality of Government Institute assesses that 62 of the 87 countries it tracks show no progress in state capability. Even among the 25 of the 87 countries that show positive growth in state capability, the typical country would take over 200 years to reach the level of state capability that OECD countries had in 1985.
  • Economic growth over short to medium horizons is almost completely uncorrelated with improvements in state capability (and some argue that growth is associated with reductions in state capability).
  • Pritchett concludes that ‘In a nutshell, my argument is that the existence of very high capability Weberian bureaucracies in the developed countries during the development era has made building state capability harder, not easier, for the developing countries … The existence of high-performing Weberian bureaucracies in the developed world which, by the beginning of the development era appeared to and were claimed to operate on the basis of a successful formula, gave credence to the idea that ‘transplantation’ of the formula and rules of those organizations could replicate their functional successes. The idea and possibility of transplantation abetted the desire of new nation-states and their rulers and of the post-war global order generally to ‘skip the struggle’
  • Ana asks why there are only middle-income country examples in the PDIA course. Why aren’t there examples of isomorphic mimicry in high-income countries? I can think of many in my home country, Italy.
  • Every time we have met so far, we have ended up talking about political philosophy, human nature, ethics and morality. The questions we have been asking ourselves, inspired by the course modules, are: why do so many problems exist in state capability? Why does running a state lead to so many problems and failures? What has human nature to do with it, since, after all, it is people who vote, who make the rules and who govern?
  • Endah has mentioned Machiavelli, who in Il Principe writes:

Any man who tries to be good all the time is bound to come to ruin among the great number who are not good. Hence a prince who wants to keep his authority must learn how not to be good, and use that knowledge, or refrain from using it, as necessity requires

and

One can make this generalisation about men: they are ungrateful, fickle, liars, and deceivers, they shun danger and are greedy for profit

  • I have been reminded of a book by Eugenio Scalfari, an Italian journalist and the founder of Italy’s second-largest newspaper, La Repubblica, which I have been reading during the last couple of weeks. Scalfari is now 92 and writes books about politics and philosophy. In his latest book, L’allegria, il pianto, la vita (Joy, tears, life), he writes: ‘In teoria, la sostanza della politica è il perseguimento del bene comune’ (In theory, the essence of politics is the pursuit of the common good). This is the theory. Does it correspond to reality, he asks?
  • To find an answer he then asks: ‘Given the essence of human nature, is peace or war the normal condition of our species?’ Wars are about winning power, and since in human history wars have been more frequent than times of peace, he concludes that the normal condition of humans is war, not peace. So, ‘if war is the normal condition (in historical terms), power is the dominant passion in human beings. While politics is about the common good, at its core, however, human nature is about pursuing power.’

Not a nice picture (so far). Three more modules to continue this interesting discussion.


Indonesia’s knowledge sector is catching up, but a large gap persists

Helen Tilley, Overseas Development Institute and Arnaldo Pellini, Overseas Development Institute

Academic publications are important reflections of the strength of the research community in a country. A strong research community fuels innovation in the economy. It’s also the bedrock for generating high-quality evidence to inform policy decisions.

Indonesia, the largest economy in Southeast Asia and the fourth-most-populous country in the world, wields substantial economic and political influence in the Asia-Pacific region. It has the potential to make important contributions through academic research and the dissemination of knowledge emerging from Indonesian universities.

In the last four years Indonesia has rapidly increased its academic publication output. Overall, Indonesia’s publication output has increased tenfold, with an average annual growth rate of 15%, growing from 538 publications in 1996 to 5,499 in 2014.
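As a rough back-of-envelope check of how these headline figures fit together, here is a minimal Python sketch (ours, not the original article’s) that computes the overall growth and the implied compound annual rate from the two endpoint figures quoted above. The 15% average annual growth rate cited in the text is presumably a year-on-year average, which sits slightly above the endpoint-based compound rate.

start_year, end_year = 1996, 2014
start_pubs, end_pubs = 538, 5499  # publication counts quoted above

years = end_year - start_year               # 18 years
overall_growth = end_pubs / start_pubs      # ~10.2, i.e. roughly tenfold
implied_cagr = overall_growth ** (1 / years) - 1

print(f"Overall growth: {overall_growth:.1f}x over {years} years")   # 10.2x over 18 years
print(f"Implied compound annual growth rate: {implied_cagr:.1%}")    # about 13.8%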


This may ultimately help Indonesia produce high-value goods for export, such as chemicals, electronics and bio-medical manufacturing. It would also quicken its transition to a middle-income country.

As Lord Nicholas Stern noted:

Whilst creativity, ideas and questioning are of value in their own right, economies and societies which invest more in research generally show faster rates of growth in output and human development.

Still behind

However, Indonesia still has a lot of catching up to do to be on par with other countries in the region and other middle-income countries in publishing academic articles.

Between 1996 and 2008 Indonesia published just over 9,000 scientific documents. That figure places Indonesia more than 13 years behind other lower-middle-income countries like Bangladesh or Kenya.

Indonesia trails even further behind neighbouring upper-middle-income countries such as Thailand and Malaysia or high-income countries such as Singapore.

Singapore, South Africa and Mexico still each produce three times as many academic publications as Indonesia.


The low production of academic papers by Indonesian research institutions is one of the symptoms of a weak knowledge sector.

In 2014 Indonesia accounted for only 0.65% of academic publications in the ASEAN region. It produced just over 0.2% of global publications. Compared to the size of the economy and population of Indonesia there’s a substantial gap between actual and potential research output.

Indonesia produces the lowest number of academic publications per US$1 billion of GDP (2.2 publications per US$1 billion of GDP), compared to neighbouring ASEAN countries and partner countries of the G20. The Philippines produces 2.7 and Vietnam 7.2 academic publications per US$1 billion of GDP.


Indonesia has also failed to maximise the potential for international collaborations in recent years. International collaborations help scientists to access knowledge and expertise, and apply them to local problems. They also enhance domestic scientific capabilities through the exchange of knowledge and experience.

Until 2011 67% of publications involved co-authorship, but by 2014 this had fallen to 44%. Previously, Indonesian authors were more collaborative than authors from countries with much higher publication output.


Indonesia’s potential

If Indonesia continues to produce academic publications at the current rate it may eventually overtake other ASEAN countries such as Vietnam, the Philippines and Malaysia. Indonesia may also overtake key G20 partners such as South Africa and Mexico, which have had lower growth rates.

Indonesia’s academic articles are also informing other research. Other researchers are citing more and more academic articles by Indonesian academics.

Between 1996 and 2011 Indonesia’s average annual increase in cited publications was 16%. This is lower than China and Singapore, but higher than the Philippines, Vietnam, Malaysia and other countries.

This does, however, reflect a lower absolute increase in cited publications compared to other middle-income G20 economies given the smaller total publication output of Indonesia. There is still progress to be made.


Indonesia’s researchers have shown progress in producing knowledge. But Indonesia must catch up to close the gap in academic publications with other countries.

To do so, Indonesia has to continue building a culture of research in its universities. This means funding basic research and innovation.

Government organisations should commission research directly from Indonesian universities and research centres to support public policy decisions. The government should also create incentives to promote private and philanthropic investment in research.

Indonesia has made an important start on funding research through the creation this year of the Indonesian Science Fund. This is the first competitive, peer-reviewed research fund in the nation.

Changes in regulations and rules are needed to guide research commissioning to support public policy. There should also be a change in attitude and expectations among policymakers.

Here too there are signs of progress. The government is considering changing procurement regulations to incentivise policy makers to commission research from Indonesian universities and research institutes.

All of this points to a cultural shift that values research. Creating a culture of research in universities cannot be done by researchers alone. It needs leadership from the government and university rectors, and clear signals that research is valued and used.

Academic publication is the visible indicator of a healthy research environment. As the culture of research is built and the research environment grows, publications will grow. Then we will see Indonesia catch up with – and perhaps surpass – other countries in the region and produce the knowledge and research evidence required by a rapidly growing economy to innovate.


This piece was co-authored by Fred Carden, Principal in Using Evidence Inc and Senior Research Advisor to the Knowledge Sector Initiative, a joint program between the Indonesian and Australian governments supporting the use of better research, analysis and evidence in public policy.


Helen Tilley, Research Fellow, Overseas Development Institute and Arnaldo Pellini, Research Fellow, Overseas Development Institute

This article was originally published on The Conversation. Read the original article.


On demand and use of evidence in policy making: very interesting experiences from South Africa

I just want to share three very interesting papers on the demand for and use of evidence in policy processes and systems, written collaboratively by colleagues from the RAPID team at ODI and government officials.

Understanding the organisational context for evidence-informed policy-making

Louise Shaxson, Ajoy Datta, Mapula Tshangela and Bongani Matomela

November 2016

Efforts to improve the use of evidence in government policy-making across the world have tended to focus on different groups and organisations. But while a good deal of work has been done to improve the supply of evidence from entities such as research centres and academia, less attention has been paid to improving demand for, and use of, evidence by government policy-makers.
This working paper describes the framework used by a team of ODI researchers and officials from the South African Department of Environmental Affairs (DEA) to analyse how DEA’s internal structures and processes, and the external policy environment in South Africa, affect how its policy-makers source and use evidence. This paper outlines, systematically, the detail of the issues we believe to be important to understanding how and why a government department operates when it comes to evidence, drawing on the authors’ direct experiences of working on evidence in the UK’s Department for Environment, Food and Rural Affairs (Defra) and DEA.

This is the first in a series of documents that have been developed as part of the VakaYiko Consortium project, supporting DEA in South Africa as it embeds and enhances an evidence-informed approach to policy-making. It has been jointly produced by a team from DEA and from ODI.

Alf Wills, Mapula Tshangela, Narnia Bohler-Muller, Ajoy Datta, Nikki Funke, Linda Godfrey, Bongani Matomela, Gary Pienaar, Nedson Pophiwa, Louise Shaxson, Wilma Strydom and Ke Yu

November 2016

Since 2008, South Africa’s Department of Environmental Affairs (DEA) has made a concerted effort to enhance its systems for using evidence to inform how it diagnoses, develops, implements and reports on policy. This report synthesises the organisational issues that influence these interactions. It is based on the findings of five studies that were conducted as part of a programme of support to DEA between 2014 and 2016. Many examples of good practice were unearthed in the studies – examples that deserve to be shared more widely. The report also identifies areas that were observed to be limiting DEA’s ability to make better use of its evidence.

This is the second in a series of documents that have been developed as part of the VakaYiko Consortium project, supporting DEA in South Africa as it embeds and enhances an evidence-informed approach to policy-making. It has been jointly produced by a team from DEA and from ODI.


Alf Wills, Mapula Tshangela, Louise Shaxson, Ajoy Datta and Bongani Matomela

November 2016

Approaches to evidence-informed policy-making must be flexible and pay equal attention to the quality of the processes through which evidence is sourced and used, as well as to the quality of the evidence itself. However, there are often common concerns within and across government departments – for example, using the full range of high-quality evidence that is available, using budgets efficiently, building relationships and ensuring wide participation, and anticipating future evidence needs.

This report derives from work done with the South African Department of Environmental Affairs between 2014 and 2016. Informed by existing good practices identified in DEA, it proposes five guidelines and sets of good practice that could underpin a systematic and phased approach to improving evidence-informed policy-making within a government department.

This is the third in a series of documents that have been developed as part of the VakaYiko Consortium project, supporting DEA in South Africa as it embeds and enhances an evidence-informed approach to policy-making. It has been jointly produced by a team from DEA and from ODI.