Logic Models

This web page was written by Carol Ormand. Initial Publication Date: April 13, 2015.

Logic models are conceptual models of how a program works. (In this case, I am using the term "program" very broadly -- see the next paragraph for details.) Such models can be captured in concept maps. Logic models illustrate program components in context, showing who is affected by each program element, what effects that element is anticipated to have on participants, and what ripple effects those participants may have on others around them. A well-developed logic model articulates, step by step, how a proposed program will achieve its goals. Developing a logic model is an excellent way to check that your program has a strong design: that is, that the program is likely to achieve its goals. Having a logic model also makes program assessment straightforward.

Program, in this context, can mean almost anything you plan to do. It could be a degree program or an internship program, but it could also be a new plan for recruiting majors, a website, a learning community you are going to build, or a fundraising effort you are going to undertake.

Why Use a Logic Model?

Three key reasons for using logic models are

  1. to clarify your chain of logic about the outcomes and impacts of what you are doing, for yourself and for others. This will illuminate any weaknesses in that chain of logic, and allow you to adjust your proposed program in advance, so that you do not waste your time and energy on ineffective activities.
  2. to facilitate the collaborative development of your program. By sharing your preliminary logic model with interested parties, you can elicit their input on how to make it better.
  3. to facilitate assessing the effectiveness of your efforts.

In short, logic models are tools for project planning and evaluation, allowing you to think through the elements of your program and what each element is likely to accomplish, and then to measure whether it did, in fact, accomplish what you expected.

Evaluators John McLaughlin and Gretchen Jordan describe it this way: "A program can be thought of as a hypothesis: if a program is implemented, then the expected results will follow. Logic modeling is a tool that can be used to unpack this hypothesis in order to understand the underlying assumptions and create strategies to test the hypothesis."

Whether you have written it down or not, if you have a plan of action and expectations about the outcomes that will result from those actions, you have at least a rudimentary logic model. Writing it down and sharing it with others allows you to get feedback on the strengths of your logic, on other possible strategies for accomplishing the same goals, and on possible unanticipated outcomes of your current plan. It can also be a simple communication tool, allowing you to share your plans and goals with others, including institutional administrators.

How to Develop a Logic Model

To develop a logic model, start by articulating your goals, and work backward to the resources and activities that will make it possible to reach those goals. These questions can guide you through the process:

1. What results do you want to achieve?

Articulating your intended results is the first step in developing a logic model. You need to know where you want to go before you can develop a road map for getting there. Knowing your intended destination also allows you to monitor your progress along the way.

Results can be subdivided into outputs, outcomes, and impacts, based on the timeframe involved:

  • Outputs are direct evidence that you have carried out the activities you planned. For example, if you plan to survey your alumni about how well your curriculum prepared them for their current jobs, the survey instrument and the survey responses would both be outputs.
  • Outcomes are the results you want to see in 1-3 years (short term) and 4-6 years (long term). If you administered your survey with an eye to curricular revision, short-term outcomes might include new content in existing courses, the development of new courses, and changes to the course sequence required for your degree program. One long-term outcome might be that a higher percentage of your graduates find employment in the geotechnical and environmental industries.
  • Impacts are the results you want to see in 7-10 years. For example, if you made the curricular changes mentioned above with the intent of better preparing your students for the geoscience workforce, possible impacts could include regional and institutional recognition of your program as a leader in this area.

When articulating your outputs, outcomes, and impacts, begin with the impacts you want to achieve and work your way back toward the outputs. The impacts are your ultimate destination; the outcomes are markers along the way. In the example laid out above, the starting point would be deciding that you want to be recognized as a leader in preparing students for geotechnical and environmental jobs. To be recognized as a leader, you would need to produce graduates who, in fact, find employment in those fields. Knowing that, you would plan changes to your curriculum to make your graduates more attractive to employers in those fields. Finally, recognizing the need for curricular changes but also realizing that you don't know which specific changes would be most effective, you might choose to survey your alumni with some targeted questions about your program.

Whatever outcomes and impacts you want to achieve, they need to be SMART: Specific, Measurable, Action-oriented, Realistic, and Timed. Consider what that means for one of our examples, above: a higher percentage of your graduates find employment in the geotechnical and environmental industries.

  • Specific: Is this outcome specific? Yes, but it could be more so. What percentage of your graduates would you like to see employed in the geotechnical and environmental industries? 25%? 50%? 75%?
  • Measurable: Is this outcome measurable? Sure. You will first need to know how many of your current graduates find employment in the geotechnical and environmental industries. Then you will need to track your students as they graduate, and you will need to choose a timeframe: how soon after graduation?
  • Action-oriented: Is this outcome action-oriented? That is, are there actions you can take that will lead to this outcome? Absolutely. Surveying your alumni is one possible action. Talking to potential employers in your area about the skills they look for in new hires is another. There are many other possibilities, as well.
  • Realistic: Is this a realistic outcome? Probably. As long as most of your graduates do not already find employment in the geotechnical and environmental industries, and as long as your faculty have the expertise to teach the skills your students will need to gain employment in those industries, it is realistic.
  • Timed: Is there a timeframe for this outcome? Not inherently, but you can set one. How soon after graduation will you contact your graduates to see whether and where they are employed?

Once you have articulated your outcomes and impacts, it is well worth the time to go back to each one and ask yourself whether it meets each of these criteria. If not, some revisions may be in order.
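
One way to make this self-check systematic is to record each outcome in a small structure and test it against the five criteria. The sketch below does this in Python; it is a hypothetical illustration, not part of any standard logic-model tool, and every field name and example value in it is an assumption invented for this page.

    from dataclasses import dataclass, field

    @dataclass
    class Outcome:
        description: str
        target: str = ""         # Specific: a concrete target, e.g. "50% of graduates"
        measure: str = ""        # Measurable: how you will measure progress
        actions: list = field(default_factory=list)  # Action-oriented: supporting activities
        realistic: bool = False  # Realistic: your own honest judgment
        deadline: str = ""       # Timed: the timeframe you have set

    def smart_gaps(outcome):
        """Return the SMART criteria this outcome does not yet meet."""
        checks = [
            (outcome.target, "Specific: no concrete target"),
            (outcome.measure, "Measurable: no way to measure it"),
            (outcome.actions, "Action-oriented: no supporting activities"),
            (outcome.realistic, "Realistic: flagged as unrealistic"),
            (outcome.deadline, "Timed: no timeframe set"),
        ]
        return [message for value, message in checks if not value]

    # The employment outcome from the example above, filled in.
    employment = Outcome(
        description="More graduates employed in geotechnical and environmental industries",
        target="50% of graduates",
        measure="annual alumni employment survey",
        actions=["survey alumni", "ask regional employers which skills they value"],
        realistic=True,
        deadline="within 2 years of graduation",
    )
    print(smart_gaps(employment) or "All five SMART criteria are addressed")

Running a check like this over each outcome before you finalize your plan is simply the revision pass described above, made explicit.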

2. What activities will produce those results?

Once you have articulated -- and refined -- your goals, it is time to think about how to reach them. At this point, pay attention to what works! Do a literature review. Search the Building Strong Geoscience Departments website, too. Talk to colleagues in other departments who have accomplished what you hope to do. Ideally, you'll come up with a list of possible paths to your goals.

Next, give some thought to whether the activities will work in your particular setting. How will these activities change what is currently happening in your department, on your campus? Think about the return on investment, too: what activities will give you the biggest results for your investment of time and energy? Are some pathways more appealing than others? Choose activities that you will enjoy, or at least won't mind doing.

When you think you're done, go back and check: are the activities you've chosen clearly linked to the program goals? That is, if you do what you're planning, can you reasonably expect to achieve the results you want? Are they do-able, given your resources and time constraints?

3. What resources will you need to conduct those activities?

And, speaking of resources and time constraints, what resources will you need to conduct the activities you've chosen? Try to think through all of the steps involved, and think about what you will need in each of the areas below, and any others that come up as you plan:

  • Time
  • Personnel
  • Partners & collaborators
  • Technology & materials
  • Space
  • Knowledge

Using a Logic Model to Evaluate a Program

Having articulated your intended short- and long-term outcomes and impacts makes it easy to evaluate your program. Start with short-term outcomes, to help with formative evaluation. For summative evaluation, move on to long-term outcomes and impacts. Strive to keep evaluation both meaningful and manageable by asking yourself these guiding questions:

  • What are the key aspects of the program?
  • For what audience are we evaluating this program? (Our own department? Someone else on campus? Prospective students? Students? Alumni?)
  • What questions does that audience have about the program?
  • If we answer those questions, how will the information be used?

If your outcomes and impacts are, in fact, specific, measurable, and timed, it's simply a matter of gathering the appropriate data to answer the relevant evaluation questions. And having goals at multiple time scales allows you to check your progress periodically, letting you figure out whether and when you need to make any mid-course corrections. In the end, the detailed planning you do up front eases your workload later on.
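
As a concrete illustration of that data gathering, the sketch below computes the employment outcome discussed earlier from a handful of made-up alumni records. Everything in it -- the record format, the industry names, and the two-year window -- is an assumption invented for this example, not a prescribed method.

    # Hypothetical alumni records: (graduation year, industry, years until first job).
    # None means the graduate is not (yet) employed.
    alumni = [
        (2015, "geotechnical", 1),
        (2015, "education", 2),
        (2016, "environmental", 1),
        (2016, None, None),
    ]

    TARGET_INDUSTRIES = {"geotechnical", "environmental"}
    WINDOW_YEARS = 2  # the "how soon after graduation?" choice from the Timed criterion

    in_target = [
        record for record in alumni
        if record[1] in TARGET_INDUSTRIES
        and record[2] is not None
        and record[2] <= WINDOW_YEARS
    ]
    percentage = 100 * len(in_target) / len(alumni)
    print(f"{percentage:.0f}% employed in target industries within "
          f"{WINDOW_YEARS} years of graduation")  # prints "50% ..."

Recomputing this figure each year and comparing it against your stated target is exactly the kind of periodic progress check described above.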

References and Additional Resources

  • Bickman, L. (1987). Using program theory in evaluation. New Directions for Program Evaluation, No. 33. San Francisco: Jossey-Bass.
  • Bickman, L. (1990). Advances in program theory. New Directions for Program Evaluation, No. 47. San Francisco: Jossey-Bass.
  • Chen, H. (1990). Theory-driven evaluations. Newbury Park, CA: Sage.
  • Innovation Network. Logic Model Workbook.
  • Kellogg Foundation (2004). Logic Model Development Guide.
  • McLaughlin, J. A. and G. B. Jordan (2004). Logic models: a tool for describing program theory and performance. Chapter 1 in J. S. Wholey, H. P. Hatry, and K. E. Newcomer (Eds.), Handbook of Practical Program Evaluation. San Francisco: Jossey-Bass.
  • McLaughlin, J. A. and G. B. Jordan (1999). Logic models: a tool for telling your program's performance story. Evaluation and Program Planning 22: 65-72.
  • Mulroy, E. A. and H. Lauber (2004). A user-friendly approach to program evaluation and effective community interventions for families at risk of homelessness. Social Work 49(4): 573-586.
  • Rogers, P. J. (2000). Program theory: not whether programs work but how they work. In D. Stufflebeam, G. Madaus, and T. Kellaghan (Eds.), Evaluation Models. Boston: Kluwer Academic.
  • University of Wisconsin - Extension, Program Development and Evaluation. Logic Model.
  • Weiss, C. H. (1997). Theory-based evaluation: past, present, and future. In D. Rog and D. Fournier (Eds.), Progress and Future Directions in Evaluation: Perspectives on Theory, Practice, and Methods. New Directions for Evaluation, No. 76. San Francisco: Jossey-Bass.