Feb. 18, 2026

Applied Outcomes: Designing CME for Learner Action

You already know how to write learning objectives. You reference Bloom’s taxonomy. You understand Moore’s outcomes framework.

But here’s the real question:

When you write a learning objective, can you clearly identify the two to three specific clinical tasks that must happen for that objective to be achieved?

In this episode—based on a webinar I participated in with the Good CME Practice Group—we go deeper than frameworks. We unpack what actually sits underneath a learning objective and how that layer determines whether your CME changes practice… or simply delivers information.

What We Explore in This Episode

  1. Why learning objectives are signposts—not the design itself
  2. How to break each objective into 2–3 concrete clinical tasks
  3. The role of workflow, format, and audience context in determining granularity
  4. How learning science (cognitive load, retrieval practice, feedback) strengthens action-focused design
  5. Where CME programs most commonly lose alignment between need, content, assessment, and outcomes

Key Takeaway

If you can’t name the specific clinical actions required to meet an objective, the content won’t drive behavior change.

Design lives underneath the objective.

Next Step

If this episode resonated, try this:

Take one learning objective from a current project and ask:

  1. What are the two or three specific clinical actions underneath it?
  2. Where do those actions appear in the content?
  3. Where are they assessed?

That exercise alone will elevate your design work.

And if you want structured practice applying this level of thinking—with feedback, live coaching, and a community of CME professionals—explore WriteCME Pro.

This is where writers become design partners.

Resources

Good CME Practice Group

Mentioned in this episode:

AI Practice Lab

Build a Practical, Safe, Repeatable AI-assisted Workflow in Just 4 Weeks. March 9 - April 2.

Move beyond experimenting with AI. In this 4-week practice lab, work hands-on with Núria Negrão to build a documented, repeatable AI workflow for research, drafting, and quality control—one you can confidently explain to clients and teams.



Next Steps

➡️ Find out exactly where to focus next in your CME writing career. Take the free assessment

➡️ Join AI Practice Lab: A 4-week practice lab to work hands-on with Núria Negrão PhD. Build a documented, repeatable AI workflow for research, drafting, and quality control.

📰 Want more tips and tools on CME content strategy? Subscribe to Write Medicine Insider

🎙️ Know someone who would love this episode? Share the podcast

🎧 CME Writer Bootcamp: A complimentary private podcast on how to break into CME/CE writing 

🎧 Write Medicine Mentor – Get behind-the-scenes insights and templates to deepen your CME writing practice.

📣 Want to get your message to an engaged audience of medical writers and CME professionals? Advertise with us.

Chapters

01:20 - Introduction

01:30 - Objectives as signposts

04:55 - Define concrete actions

11:56 - Format, length, scope

12:04 - Psychological safety

13:14 - Real world example

18:24 - Mapping Bloom's to Moore's isn't a straight line

20:17 - Learning science tools

25:28 - Where CME assessment breaks down

28:29 - Wrap up

Transcript
Speaker 2:

What if the reason your CME content isn't driving behavior change isn't because your learning objectives are wrong, but because they're not specific enough? If you've ever written an objective using Bloom's taxonomy, then mapped it to Moore's levels, and still felt unsure how to translate that into concrete clinician action, this episode is for you. Today's conversation comes from a webinar I participated in with the Good CME Practice Group, where we dug into a deceptively simple question: what does it actually mean to begin with the end in mind? In this hot-seat-style discussion, we unpack what sits underneath a learning objective: the two to three clinical tasks that determine whether your content supports learners, or simply delivers information. I'm Alex Howson, and this Write Medicine episode comes to you from the Good CME Practice Group. Join us.

Speaker:

Hello, and welcome to today's Good CME Practice Group webinar, focusing on designing CME for learner action. My name is Margarita Velva, Manager of Continuing Education at Kenes Group, and today I'm joined by Rebecca Cox from Springer Healthcare IME. Kenes Group and Springer Healthcare IME are members of the Good CME Practice Group. We're very pleased to have Alex Howson of WriteCME Pro as a guest speaker today. Alex is the author of Write CME Roadmap, host of the Write Medicine podcast, and founder of WriteCME Pro. Without further ado, we're going to kick off with our topic. I'll now give the floor to Alex.

Speaker 2:

Thanks, Margarita. Hello everybody. It's really nice to be here with you. Today we're gonna be focusing on the question of beginning with the end in mind, a phrase that has been part of our community for a very long time, on designing for learner action, and on how learning science can help us. I'm coming at these questions from the perspective of a writer, kind of getting into the weeds on the content side of things. So when I think, first of all, about designing with the end in mind as a writer, of course I'm thinking about outcomes frameworks that give us the big picture. I'm thinking about Moore's levels framework. I'm thinking about self-efficacy and confidence, maybe the Kirkpatrick evaluation model that's used in some programs here in North America. Despite the accent, I am based in the USA. The Kirkpatrick model is similar to Moore's in that it measures reaction, learning, behavior, and results, but it has four levels, not seven. But the point is, I see these frameworks as a final destination, and learning objectives are signposts on the road toward that destination. So when we're thinking about the end in mind, I'm starting at the top and then working backwards from the outcomes to the learning objectives, and then to what lies underneath the learning objectives to support learners achieving those objectives. So there's another layer of specificity that I'm trying to be really mindful of, because the content has to clearly support those learning objectives and lead the learner toward the final destination, those outcomes that we're looking at. And so I'm trying to create concrete action steps that will lead the learners to the learning objective. When we're thinking about learning objectives, of course, many of us are using Bloom's taxonomy. It's used widely in this field; we're very familiar with it. Bloom's is primarily focused on cognitive learning levels. It doesn't always give us a framework for thinking through those concrete steps or actions that learners need to take in order to reach the learning objectives. So I try to do a couple of things. One is, when I'm thinking about each learning objective, I want to identify two to three very specific tasks or actions or steps that the content has to help the learner take in order to meet the learning objective. There are some tools out there, like Robert Mager's performance objectives, which are helpful in identifying those tasks. They're highly specific, measurable goals composed of three core components. One is performance: what will the learner do to meet the learning objective? Then conditions: under what conditions will they do it? And criteria: what needs to be present? Do they need to do something within a specific timeline? Do they need to use a specific tool? Once I've identified those two to three concrete actions, then I'm thinking about the audience. I want to know more about the audience. I want to know about the clinical workflow, what kind of setting or context we're creating the activity for, what kind of context the audience works in. And of course, we're often working with multiple audiences, which can be very challenging. I want to think about clinical reasoning. What are the clinical reasoning steps that the learners might be taking in order to meet a particular learning objective? I'm thinking about competencies. I'm thinking about types of action. And of course, there are lots of resources.
We're not doing this in a vacuum. We have a solid needs assessment, especially if we have a very robust gap analysis, and a root cause analysis that gives us lots of information to really start parsing out those two to three steps that support each learning objective. It's also a great time to ask good questions of your faculty and subject matter experts, to really tease from them what the key teaching moments ought to be, asking what's the one concrete action that would be a win for the learner and help them achieve each learning objective. So when I think about starting with the end in mind, I'm almost immediately working backwards and trying to drill down to how specific I can get in identifying those two to three steps that support each learning objective, knowing that the learning objectives are the signposts that lead us toward those outcomes. So I'll stop at that point just to see if there are any questions.

Speaker 3:

Yeah, thanks Alex. I fully agree it's so important to not just think of the one end behavior, but the steps to how to change that end behavior. Just a question from my side, also coming from a writer's kind of perspective. You mentioned that actions will vary depending on the clinician role, the specialty, as well as the geography, even within the same country that they're practicing in. How do you gauge the right level of granularity, so that the actions you're coming up with reflect not just one practice model, which, in global programs for example, is often biased towards a kind of Western healthcare system or more specialized centers?

Speaker 2:

Yeah, that's a great question. So I think there are two main things that determine that level of granularity. One is the format. What is the education format? Is it live? Is it online? Is it using cases? Is it using any kind of interactive polling? Is it being designed for mobile consumption? These sorts of things definitely have a bearing on the level of granularity. And often when we start off developing the content, we're gonna have a lot more content than we need for a particular activity. So one of the things we're gonna be doing is pruning all that extraneous stuff, while making sure it still addresses the steps that the learners need to take. So format is a big thing. I think the other thing is... oh, it's gone right outta my head. It'll come back to me. Oh yeah: the length of the activity, the credits. Whether we're talking about a 15-minute, a 30-minute, or a 60-minute activity is gonna have a very significant bearing on how granular we can actually get.

Speaker 3:

Perfect. Yeah, I fully agree. And I guess just a follow-up question. Oftentimes the kind of end in mind is a linear journey for the healthcare professionals we're trying to educate. So how do we balance the need for that clear route, so that we can educate clearly and stepwise, with the realistic situation where often that route will be branching? There'll be uncertainty, there'll be competing pressures. How do you navigate that?

Speaker 2:

So, if I'm understanding the question correctly... let me backtrack there. I think there are different kinds of outcomes. For the education planners and designers, the outcomes are very much focused on what's the ultimate thing we want learners to be able to do, so that we can demonstrate some kind of impact or change and the effectiveness of education, and all of those things. And that's part of our communication with the wider field, with our supporters, and that ongoing process of demonstrating the effectiveness of education and the impact that education can make. The outcomes for the learners, I think, are a little different. It's smaller bites. They don't really care whether we meet our outcomes. The outcome for them is, in a given activity, for instance, can they shift their mindset to be more clearly in line with recommended clinical practice? Does that start to answer the question, or am I going off topic?

Speaker 3:

No, I think it does. I think it was, like you said, the shift in their mindsets, as opposed to focusing fully on just the outcome we're trying to achieve. Yeah, it was a bit to do with branching formats as well, where there's not just one path that might be the correct path. But I think that's something you'll come onto later. So, in the interest of time, there has been a question about learning objective formulation, but I think you're gonna go into that in the next section. I think we can move on here.

Speaker 2:

Yeah, and just to answer your question quickly, and we might come back around to it: yes, if we're talking about formats like branching cases, branching logic, those sorts of things, or situations of clinical uncertainty where it's not always clear what the next step should be, then one of the things we can do to make that okay for the learner is to build psychological safety into the design of the activity or the program itself. In a live setting, that might be getting input from the audience right at the very beginning. It might be things like reflective questions to get them to really acknowledge that it's okay, there are areas of uncertainty in this particular clinical topic. So I think there are some learning science tools we can use there to support that uncertainty as part of the learning process and experience. But to get to your point about learning objectives, let me share an example here: a learning objective in an activity on atopic dermatitis. That's the one I chose, not my area at all. Let's say it says something like, "Assess patient candidacy for advanced biologic therapies targeting IL-13 in moderate to severe atopic dermatitis." That's a reasonable learning objective. I'm sure we can tweak it, we can refine it, but it's from a real program. I want to break that down into two to three actions that learners have to take in order to address that objective. What's the objective about? This is an objective that involves judgment as the behavior: recognizing appropriate candidates. But other learning objectives involve other kinds of behaviors. Decisions, like deciding whether or when to select or initiate therapy. Communication behaviors or actions, like handoffs to other team members. Application behaviors, like ordering tests. So when we're thinking about those next steps, those two to three actions to support the learning objective, we're also thinking about what kind of learning objective we're working with. What are the actions embedded in, or implied by, the learning objective? Are we talking about judgment? Are we talking about decisions? And so on. The second thing, when we're in that designing-for-learner-action piece, is I wanna be thinking about context, and you mentioned that earlier, Rebecca, in terms of geography. So again, you have your needs assessment, assuming it is robust, has a solid gap analysis, a root cause analysis, and is specific to the setting the education is addressing. Then I'm thinking about which learners we're talking about, and trying to do a deeper dive into who those audiences are, what their roles and responsibilities are in clinical care, the types of settings they're working in, the potential clinical workflow for those settings. 'Cause they're all gonna have a bearing on how they're going to assess patients for this specific type of therapy. And then the third thing, I guess, is digging into those tasks. So say, in this case, the primary audience for this learning objective is dermatologists in a community setting. I want to know two or three things that they need to be able to do in order to make that assessment, which, remember, is a judgment-type learning objective. These could be things like: they need to think about disease severity markers, the BSA score, patient-reported outcomes. Often dermatologists will tell you they use a gestalt.
That's a step that they take. They need to evaluate prior treatment responses: ask patients questions about the therapies they've used, or look at their charts and records to see what kinds of therapies have been used to manage their atopic dermatitis in the past. A third thing they might have to do is screen for contraindications or comorbidities that affect biologic selection. So in this broad learning objective about assessing patient candidacy for a specific therapy type, we've already got three steps that the learner has to take in order to meet the learning objective. And there could be other ones that also involve judgment or are decisional, for instance, ordering tests or not escalating treatment. They could be communicative tasks, like communicating with other team members, and so on. So the second layer, when I start to parse out the learning objective, is that I'm thinking about context, I'm thinking about tasks, I'm thinking about those specific things that the learner has to do to meet the learning objectives. And often, I know I've talked about needs assessments, I've talked about talking to faculty, but there are other resources that support us at this particular point. Like the clinical competencies that medical specialty societies develop for their learners, or entrustable professional activities. These are all resources to help us parse out those specific tasks. And we can use them in concert with those other materials, needs assessments, subject matter experts, to really get a better handle on specific contexts. I know often we're designing education where we're taking those contexts into consideration, but actually the audience ends up being much wider. And that's a different challenge. So again, I'll stop there and see if any questions have come in.

Speaker:

Thank you, Alex. I think you explained very well the idea of designing for learner action. There was a question that Rebecca mentioned about learning objective formulation specifically. Very briefly, in terms of Bloom's taxonomy or Moore's levels as ways to formulate learning objectives, is there any preference, or what would you suggest?

Speaker 2:

Moore's tells you about the outcomes levels. Is the question about aligning Bloom's more clearly with Moore's? Yeah. I don't think there's a straightforward path, and that's because if you look at Moore's framework, and it's not the only framework, but of course it's the most commonly used, we've got that pyramid structure, we've got those different levels: knowledge, competence, performance. Bloom's doesn't neatly map onto Moore's because it's a cognitive-level taxonomy, and yes, we get to higher levels of cognition, but I don't think there's a straightforward alignment. And that's where I think getting into the weeds about the specific tasks and steps that learners need to take provides that additional support for whichever Bloom's learning objective you're using. That's the thread through to the outcomes. I think that's the thing that gives the learning objective more power and helps to reach the specific outcome you're trying to reach. It's not the learning objective itself.

Speaker:

Thank you. We do have one more question but perhaps we continue further and then we can address that shortly.

Speaker 2:

Sure. And I think we talked at the beginning of our session about how learning science helps us with all of this. I think often learning science feels a little bit scary, because there are so many theories, so many frameworks and philosophical approaches embedded within the learning science continuum. And when we think about learning sciences, we're talking about approaches that come from neuroscience and sociology and psychology and all sorts of different places. But I try to think of learning science as an enormous toolkit that has very specific tools for different purposes. I think the first thing... somebody asked a question earlier about, or did they? Maybe I'm making this up. We were talking about formats, and we were talking about mobile options, the things that determine content, the time of the activity. I can't remember the question now, but one of the things we are always paying attention to, which comes from different parts of learning science, but mostly psychology, is cognitive load. And Rebecca, I know this is something that you'll be thinking about all the time. This is the principle we probably use most, even if we don't know we're using it. Because every time we edit ruthlessly to get rid of that extraneous material, we make sure the content is really addressing the learning objectives in the time and the format we have available to us. We're chunking content. We're sequencing from simple to complex. We're pairing images with text to get that dual coding support system firing up. We're using bullets and white space to manage cognitive load, and we're using plain language principles. I'm thinking about readability, sentence length, descriptive headings, so that the learner can very quickly see what the content is going to focus on and get that rapid understanding of what the content is about. And I think that optimizing all of this helps to make concrete the actions the learner needs to take to support each learning objective, and it helps us make better decisions about the content we're actually including. Because I do think we get very tied up in "everything's got to be in here, we've gotta have as much content as possible." But learning science tells us that less is more. And so those cognitive load principles force you at each step to consider whether you are including content that supports the action step, or content that is supplementary and could be presented in a different format: as part of your learning support materials, or as part of a discussion thread on Slack or Telegram to include that social component of learning. So that's the first thing. Cognitive load is something I think we do even without thinking about it. The second is looking for those opportunities to apply other types of learning science support. For instance, when we think about learning, we're thinking about deliberate practice. We're thinking about feedback and course correction. We're thinking about opportunities for retrieval practice to support that recall and information application. Maybe we're doing that through cases that range from simple vignettes to more sophisticated branching-logic-type cases. Maybe we're doing that with little decision checkpoints in the content. Depending on the format, maybe we're building in some reflective questions. So we're thinking about those different ways to enhance the concrete steps by adding that layer of supportive learning science into the content design.
To wrap this section up: the tools we have in learning science, I think, help to align the concrete actions with the overall design and the learning objectives, heading toward the outcomes. They support those small steps, those actions. So judgment might include case progression, "what do you do next?" type questions. Decision actions, like choosing when or whether to initiate therapy, work well with branching cases, with embedded multiple-choice questions with feedback that helps that course correction, either in the format of the activity itself, if we're using an online simulation platform, or just in our heads. Communication-heavy actions work well with patient-centered cases, reflective questions, scripts for communication. That's how I think about learning science: we don't need to choose a theory. There are lots of different tools within this vast toolbox that I think are mostly straightforward and applicable to the work we do every day.

Speaker 3:

Yeah, definitely agree. And I think, based on what you were saying, the relevance of the content is key: getting to the most relevant piece of information for our audience members. Not, like you say, having all the background up front if it's not needed, or having that as a supplement. And that's certainly something that we do within our education. I think you touched on it a bit. I'm aware of time, so I'll link it with a question we've got from the audience about bridging the content with the assessment and the outcomes you're trying to achieve, and making sure they're all lined up. Where do you see CME providers, or anyone, break down most commonly? And, to try and shoehorn the question in: is there any role you see for new technologies such as AI to help us, as writers, keep a grip on all these moving parts and all the different things we need to think about when designing programs? Have you used any technologies that you can advise to help in that process?

Speaker 2:

I dunno about advise. I'll address the first question first, 'cause that would be the thing to do, wouldn't it? What I do hear from outcomes teams sometimes is that when they get to the point of designing the pre- and post-test assessment questions, often what they have is the needs assessment. And the needs assessment doesn't always go deep enough in terms of those concrete steps, those specific actions that support each learning objective. And so they are grappling in the dark to try and figure out what threads they need to pull through in order to craft those questions. And I think that once we have those concrete steps, those are the threads that really lead to, "oh yeah, these are the specific teaching points and touch points we need to assess," whatever type of assessment we're using. So that's certainly a gap I hear about from outcomes teams. And in 30 seconds or less: can we use tools or technologies to pull out those touch points? I haven't done it, but I imagine there's a role for AI. Once you have the needs assessment and those concrete steps, and you're starting to parse out the content a little more, you could use AI to pull all those threads together, as you put it. There are lots of moving parts, but you could pull them into one core document that becomes the source of truth for everybody who's touching the content, from the writer right through to the outcomes analyst.

Speaker 3:

Great. Yeah, I definitely agree. I think, like you say, we need to do our own due diligence at the beginning, without AI putting in hallucinations anywhere. But like you say, having that kind of document that solidifies content with outcomes, so you can identify any misalignment from the get-go before you start delivering the program, is key. Aware that we're at the half hour now, so sorry if we didn't get to all of your questions. I'll pass over to Margarita to wrap up.

Speaker:

Thank you, Rebecca. Thank you, Alex. Unfortunately, we are at the end of our session. Thank you to everyone who joined, and we look forward to seeing you at our next webinars. Thank you so much.

Speaker 2:

If this episode clarified something for you, don't stop at the insight. Take one learning objective from a current project and ask yourself: what are the two to three concrete clinical actions underneath it? That's where design begins. And if you want structured support practicing this level of thinking, with real projects, real feedback, and real alignment between objectives and outcomes, join us inside WriteCME Pro. This is where writers become design partners. I'll see you there.