17 Nov Educational Sustainability and Measurement
The first metric of sustainability is demand, from university lecturers as well as from students, career professionals, and governments. If demand is high and is seen to be usefully met, the institutional, human, and financial resources needed to meet it are more likely to be mobilized over time.
The capacities to sustain such efforts are fostered by involving the institutions and key staff in the knowledge creation and organizational learning process from the outset.
Interventions in capacity raising are nonetheless often costly, and the knowledge generated by such events often decays quite rapidly unless regular resources sustain it.
The educational approach addresses this problem partly by working with the existing organizations that provide education and training to the target groups most influential to the reform process. In this way, funding volatility does not undermine the survival of the capacity-raising activity.
Educational networks provide a series of services and products that lower the cost to each network of developing new capacity-raising tools and shared teaching resources.
Educational work in this area of social reform has traditionally been resistant to attempts to measure the quality and effectiveness of its activity. Outputs are often measured only in terms of the programme logic itself, or in vague terms. This is partly due to the complexity and cost of measuring the impact of capacity-raising initiatives, and partly, perhaps, a negative incentive not to address the issue for many apparently good reasons.
Educational activity is developed from an analysis of the reform agenda which asserts that more knowledge needs to be disseminated in order to advance the reform agenda globally. To do this, codified knowledge must be fused with tacit knowledge, then appropriated and developed by the communities seeking reform.
This approach is consistent with the underlying approach that knowledge rather than information needs to be disseminated to support pro-integrity reform.
It is not practical to base indicators of the organization's own effectiveness on the levels of education or reform in a given country; the causal links are too blurred for this to be possible. What it can do is assess its performance against the logic of its activities: the extent to which its activities are consistent with that logic and are measurable in outputs. Indicators also have to be methodologically meaningful and practically realizable.
The key performance indicators for network partners are cases of the application of knowledge and skills to education work in defined settings.
As an organization concerned with knowledge management, the indicators need to answer some key questions concerning its activities and the impact they have on knowledge:
1. What is the degree of activity? How many projects have been initiated? What has been the activity in terms of end-user outputs of these projects? How widely have the results of the activities been disseminated?
2. What is the quality of the knowledge generated by these activities? To what extent is the process of developing knowledge from information, and the blending of tacit and codified knowledge, achieved through educational network activities?
3. What is the sustainability of the knowledge management process? How well has the transition to local ownership been achieved (which is also a good indicator of demand)? Is the process shallow or deep (i.e. what sort of networks and organizations are supporting the initiative locally)?
The measurable indicators are the following:
No. of partners.
No. of courses developed (in-country and stand-alone).
No. of scholarships attributed.
No. of policy papers co-ordinated.
No. of commissioned papers.
No. of countries of activity.
No. of dissemination activities and resources.
No. of users of the website.
No. of replications and adaptations of Tiri courses.
No. of websites referencing and containing educational materials.
No. of partnership organizations.
No. of codified sources employed.
No. of types of codified sources employed.
No. of tacit sources of knowledge introduced.
No. of new sources of tacit and codified knowledge created.
No. of institutions of high standing seeking assistance.
Type of organizations involved.
Range of engaged actors and organizations.
Declining long-term input.
Local sources of funding.
Double loop learning in practice.
Influence on policy.
None of these indicators is sufficient in itself, but together they represent a coherent response to the motivation for the organization, and thus a satisfactory goal for the development of an index. Each of these indicator themes will be displayed in a spider diagram to indicate relative strength and weakness.
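The step from raw indicator counts to a spider diagram can be sketched as follows. This is a minimal illustration, not the organization's actual method: the indicator names, raw counts, and target values below are hypothetical, and the normalization rule (score = count divided by target, capped at 1.0) is an assumption chosen so that all themes share a common 0-1 axis.

```python
# A minimal sketch of normalizing indicator counts for a spider diagram.
# All names, counts, and targets here are illustrative assumptions.

def normalize_indicators(values, targets):
    """Scale each raw indicator count against its target, capping at 1.0,
    so every theme is expressed as relative strength on a 0-1 axis."""
    return {name: min(values[name] / targets[name], 1.0) for name in values}

# Hypothetical raw counts for a few of the listed indicator themes.
raw = {"partners": 12, "courses": 5, "countries": 18, "policy_papers": 2}

# Hypothetical targets against which relative strength is judged.
target = {"partners": 20, "courses": 10, "countries": 15, "policy_papers": 8}

scores = normalize_indicators(raw, target)
# A score of 1.0 means the target is met or exceeded (a strength);
# low scores mark the weak axes of the spider diagram.
```

The resulting dictionary of 0-1 scores maps directly onto the axes of a spider (radar) diagram, with each indicator theme as one spoke.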