Improving Through Reflection

Peer Feedback & Self-Reflection for RMI Instructors

Reflection is one of the most reliable ways RMI instructors improve their teaching over time. By looking back on your methods, experiences, and outcomes, you can refine what you do well, address what is getting in the way of learning, and make purposeful changes that strengthen future sessions. The most useful reflection is both personal and social: you learn from your own observations, and you grow more quickly as an instructor by comparing notes with others about what was clear, grounded, and usable in practice.

This page is organized into two complementary sections:

  • Reflection With Others to Calibrate RMI Instruction: Using feedback from learners, peer instructors, and experienced practitioners to confirm relevance, clarity, and application, and to check pacing and density (what felt too dense, too fast, or too light).

  • Self-Reflection to Strengthen RMI Instruction Over Time: Using structured self-reflection tools to identify patterns in your instruction, make targeted revisions, and evaluate their impact over time.

Use the sections below to choose one or two reflection strategies that fit your style, time, and teaching context, then apply them consistently to strengthen your RMI instruction over time.

Reflection With Others to Calibrate RMI Instruction

RMI instruction improves fastest when instructors treat every course as a source of field evidence. What confused learners, what sparked strong discussion, where judgment differed, and which examples landed all point to how well the session supported real work. Reflecting with others helps you turn that evidence into better scenarios, clearer explanations, and stronger practice opportunities for the next group.

A simple way to do this is to reflect with three groups: learners, peer instructors, and experienced practitioners in the field. Each is explored below.

  • Learner feedback is especially valuable in RMI because participants can tell you whether the instruction matched the realities of their roles. Beyond general satisfaction, look for input that helps you refine decision-focused teaching.

    Use feedback to identify:

    • Relevance: Did the cases and examples match learner job contexts (risk manager, broker, underwriter, adjuster, loss control)?

    • Clarity: Where did terminology, policy language, or frameworks feel unclear or inconsistent?

    • Application: Did learners get enough practice making a decision, stating assumptions, and defending a recommendation?

    • Pacing and cognitive load: Which parts felt too dense, too fast, or too light?

    To make feedback more actionable, include a few targeted questions such as:

    • “What is one concept you can use immediately at work?”

    • “Where did you feel least confident, and what would have helped?”

    • “Which scenario felt most realistic, and what was missing?”

  • RMI instructors often teach complex topics where accuracy and nuance matter. Reflecting with colleagues helps you pressure-test both technical content and instructional approach.

    Colleague reflection is most useful when you focus on:

    • Accuracy checks: Are definitions, examples, and interpretations consistent with accepted practice and current standards?

    • Consistency across instructors: Are you using shared terminology, templates, and scenario structures so learners get a coherent experience?

    • Decision quality: Did activities require learners to identify key facts, surface assumptions, weigh trade-offs, and justify choices?

    • Common misconceptions: What wrong answers keep appearing, and how should you address them next time?

    A simple approach is a short debrief after delivery:

    • what worked well

    • where learners struggled

    • what to change next time

    • what to preserve as a “standard” example or activity

    This kind of review builds a practical community of practice and reduces the load on any one instructor to “get it perfect” alone.

  • Industry leaders can help you validate whether your instruction reflects current practice and emerging issues. Their input is especially useful when:

    • the topic is changing (market conditions, claims trends, regulatory shifts, new exposures)

    • you want stronger realism in cases and scenarios

    • you need examples that reflect how decisions are made under constraints

    When you engage industry leaders, ask for feedback in RMI terms:

    • “What would you look for first in this scenario?”

    • “What is a common mistake you see in practice?”

    • “What documentation or communication is expected in a well-handled decision?”

    • “What would change your recommendation?”

    Use their insights to update scenarios, strengthen debrief points, and ensure learners practice the reasoning that experienced professionals actually use.

Self-Reflection to Strengthen RMI Instruction Over Time

When teaching RMI professionals, improvement rarely comes from wholesale reinvention. More often, it comes from disciplined reflection on what happened in the room, followed by a few targeted changes you can carry into the next delivery. The tools on this page are designed to help instructors identify meaningful learner behaviors from their sessions, make targeted revisions to instruction, and evaluate whether those revisions improved workplace performance.

This page outlines four practical reflection tools RMI instructors can use to sharpen instruction:

  • Goal-Setting Session: Translating what you observed in a course into one or two measurable improvements you can implement and track across deliveries.

  • Meditative Reflection: Using a short post-session reset to notice patterns you might miss in the moment and identify a small, high-value adjustment for next time.

  • Reflective Journaling: Capturing observations and learner reasoning in a consistent format so you can spot trends, refine scenarios, and improve transfer over time.

  • SWOT Analysis: Stepping back to identify strengths to repeat, gaps to address, external opportunities to leverage, and risks to mitigate before the next session.

These tools frame reflection as a professional habit, not an academic exercise. You do not need to use all of them. Choose one method that fits your time and context, or combine two, so you can make improvements that are practical, repeatable, and grounded in the realities of RMI work.

Goal-Setting Session

Goal setting is a practical reflective practice for RMI instructors who want to improve instruction over time. By setting specific, measurable goals, you can evaluate what happened in a course, identify what to adjust, and prioritize development that improves learner performance on the job.

To keep goals meaningful, tie them to observable outcomes from your last delivery, such as where learners hesitated, what they misapplied, and which activities produced the strongest reasoning. From there, choose one or two improvements you can implement and measure in the next session.

Use the steps below to set goals that are concrete, job-relevant, and easy to track across course deliveries.


Goal-Setting Session Steps

  • Step back from the last session you taught and reflect on the “through-line” of how you teach RMI. The goal here is to name your default approach, what learners consistently get from you, and what patterns show up repeatedly.

    Consider questions like:

    • What role do you naturally take? Technical translator, case facilitator, coach, standards enforcer, storyteller, discussion moderator, problem-solver.

    • What do learners consistently praise or struggle with in your sessions? (clarity, relevance, pacing, realism, confidence in application)

    • Where do you reliably add value? Making complex concepts practical, surfacing assumptions, connecting content to job roles, keeping discussions grounded.

    • What patterns repeat across sessions? Running short on debrief time, strong content but limited practice, great discussion but uneven takeaways, noticing learner misconceptions but not pausing to correct them before moving on.

    Practical Tip: Skim 2–3 sources of “trail data” from multiple sessions (common questions, chat logs, quiz misses, end-of-class comments). You’re looking for recurring patterns, not one-off moments.

  • Now zoom in and assess the instructor skills that turn technical RMI knowledge into clear explanations, realistic practice, and defensible decision-making. Treat this as a quick diagnostic, not a performance review. Use evidence where possible (quiz results, observation, colleague notes).

    A practical way to do this is a simple “traffic-light” check (Green = reliable strength, Yellow = inconsistent, Red = needs attention) across the areas below:

    • Technical clarity and precision

      Can I explain key terms and distinctions in plain language without losing accuracy?

      Evidence: fewer repeated “definition” questions; learners can restate concepts correctly.

    • Scenario design and realism

      Do my scenarios resemble how information appears in real work, including constraints and ambiguity?

      Evidence: learners engage quickly, ask better questions, and don’t dismiss scenarios as unrealistic.

    • Facilitating professional judgment and disagreement

      When learners disagree, can I surface assumptions, variables, and decision criteria instead of “declaring the answer”?

      Evidence: discussion produces clearer reasoning; learners can name what would change the decision.

    • Practice and feedback that improves reasoning

      Do learners get enough chances to decide, justify, and revise thinking based on feedback?

      Evidence: justifications improve over the session; fewer repeated misconceptions.

    • Pacing and cognitive load management

      Do I protect time for application and debrief, not just coverage?

      Evidence: you consistently reach practice activities; learners are not lost by mid-session.

    • Transfer to job performance

      Do learners leave with something they can use on Monday?

      Evidence: learners can articulate a next step, a checklist, a decision rule, or a communication template.

    To keep it actionable, choose one strength to leverage (something you will intentionally use more) and one growth area to target (something you will measure and improve next delivery).

  • Using your instructor profile (Step 1) and your strengths and growth areas (Step 2), set one or two goals that are clear and realistic for the next delivery. Strong RMI-instructor goals name the learner behavior you want to see, the conditions under which learners will demonstrate it (usually a short scenario), and the evidence you will use to measure improvement.

    A practical way to write the goal is: “In a realistic situation, learners will do x. We will know it worked because y.”

    For example, instead of “improve engagement,” use a performance-based goal such as:

    “By the end of the session, learners will identify the top three missing facts and select a defensible next step in a short risk or claims scenario. Improvement will be measured by a rubric-scored scenario check, with at least 80% of learners meeting the standard.”

    To keep goals manageable, break larger goals into small milestones you can implement and track across instructional sessions and limit yourself to one content-related change and one delivery-related change per cycle.
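If you track rubric results across deliveries, even a small script can show whether the pass rate is trending toward your target. The sketch below is a minimal illustration, not part of any prescribed toolset: the scores, the 0–4 scale, the "meets the standard" cutoff of 3, and the 80% target are all hypothetical placeholders you would replace with your own rubric and data.

```python
# Hypothetical rubric scores (0-4 scale) from two deliveries of the same
# scenario check; all names and thresholds are illustrative, not prescribed.
MEETS_STANDARD = 3   # minimum rubric score counted as "meets the standard"
TARGET_RATE = 0.80   # goal: at least 80% of learners meet the standard

def pass_rate(scores):
    """Return the fraction of learners whose rubric score meets the standard."""
    met = sum(1 for s in scores if s >= MEETS_STANDARD)
    return met / len(scores)

baseline = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]   # prior delivery
current  = [3, 4, 2, 4, 3, 3, 4, 2, 3, 3]   # delivery after the change

for label, scores in [("baseline", baseline), ("current", current)]:
    rate = pass_rate(scores)
    status = "meets goal" if rate >= TARGET_RATE else "below goal"
    print(f"{label}: {rate:.0%} ({status})")
# prints:
# baseline: 40% (below goal)
# current: 80% (meets goal)
```

Comparing the two rates against the same cutoff is the whole point: it turns "improve engagement" into a yes/no check you can repeat after every delivery.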

  • Check that your goals support the outcomes your course is intended to produce and that they fit the learners and context you identified in Step 1.

    In RMI instruction, the most useful outcomes reflect job performance, such as interpreting a fact pattern, prioritizing exposures, selecting an appropriate action under constraints, documenting assumptions, and communicating a defensible recommendation.

    To confirm that your personal instructional goals align with your learning outcomes, consider these three things:

    • Outcome fit: Does the goal strengthen the specific RMI capability the session targets (coverage interpretation, claims decision-making, risk evaluation, control selection, documentation and communication)?

    • Role and realism: Is the goal appropriate for the roles and experience levels in the room, and does it allow for role-appropriate decisions when justified?

    • Priority and impact: Will improving this area reduce a recurring misconception or meaningfully improve learner performance in real work?

    This alignment helps you focus effort on changes that matter most and avoid optimizing for activity or engagement that does not improve application.

  • Turn each goal into a short plan you can execute before the next delivery. Keep the plan realistic for your workload and targeted to the specific learner behavior and measure you chose in Step 3.

    For each goal, identify:

    • What you will change: The one or two instructor moves or material updates that will drive the learner behavior (for example, a revised scenario, a new decision prompt, a tighter debrief, a rubric, or a short knowledge check).

    • Where it will show up: Slides, scenarios, discussion prompts, job aids, quizzes, or facilitator notes.

    • How you will measure it: The exact checkpoint you will use (scenario item, exit ticket, rubric-scored response, poll sequence), including what “meets the standard” looks like.

    • What you need: Further research for technical accuracy, updated examples, templates, tools, or time from a co-instructor.

    • When it will be ready: A deadline tied to your next instructional session, plus a quick “final check” for accuracy, currency, and confidentiality if you are using real-world examples.

    As noted in Step 3, a good rule is one content-related adjustment and one delivery-related adjustment per cycle, so improvement stays consistent and sustainable.

  • After your next instructional session, use the measure you defined in Step 3 to check whether learners demonstrated the targeted job-relevant behavior more consistently. Review the evidence from your scenario check, rubric-scored response, poll results, or exit ticket, and compare it to your baseline or prior delivery.

    As you review results, look for:

    • Performance: Did more learners meet the standard you set?

    • Reasoning quality: Were assumptions clearer, trade-offs better identified, and next steps more defensible?

    • Misconceptions: Did the specific recurring error show up less often, or in a weaker form?

    Then decide what to do next:

    • Keep the change if it improved learner performance.

    • Revise it if results were mixed (adjust the prompt, scenario facts, debrief, or rubric).

    • Replace it if it did not work, and choose a different lever for the same outcome.

    Close the cycle by documenting what you changed and what you learned, so you can reuse what worked and continue building a consistent instructional approach over time.

Meditative Reflection

RMI instruction can be cognitively demanding and socially high-stakes. You are often teaching peers, covering technical material, and facilitating discussions where professionals may disagree. Meditative reflection is a simple way to reset after delivery and to notice what you might otherwise miss: where you felt rushed, where learner uncertainty showed up, which moments created clarity, and what you want to do differently next time. The goal is not to “become a meditator.” The goal is to create enough calm and distance to evaluate your instruction with honesty and care.


Meditative Reflection Steps

  • Choose a quiet place and a realistic time, such as 5 to 10 minutes after a session or at the end of the day. Consistency matters more than duration. Keep a notebook or notes app nearby so you can capture insights immediately.

  • Sit comfortably, breathe slowly, and let your attention land. If it helps, use a simple count (inhale for four, exhale for six) for a minute or two. Your aim is to reduce mental noise so you can reflect clearly.

  • As thoughts come up about the session, treat them as information rather than a verdict. In RMI teaching, it is easy to fixate on a tough question, a missed point, or a moment of disagreement. Instead, notice what happened and what it might mean for the next delivery.

  • Once you feel steady, revisit a few specific points in the session. Focus on the moments that matter most in professional learning:

    • Where did learners struggle to move from concept to application?

    • When did you see strong reasoning, and what prompted it?

    • Where did learners disagree, and did you surface the assumptions behind each view?

    • Which examples felt realistic, and which felt too abstract or too narrow?

    • Did you model “defensible decision-making,” including stating assumptions and naming what information would change the decision?

    Let patterns emerge. You are looking for one or two high-value adjustments, not a long list of faults.

  • Turn your reflection into a small, job-relevant commitment. Examples:

    • “Add one scenario-based checkpoint before we move from definitions to coverage interpretation.”

    • “When disagreement shows up, ask, ‘What assumption are you making?’ before I resolve it.”

    • “Replace one abstract example with a short fact pattern that mirrors how a claim or risk review actually unfolds.”

    • “Tighten terminology by using the same wording across slides, discussion prompts, and quiz items.”

    These intentions keep reflection from becoming vague and help you improve the parts of instruction that drive learner performance.

  • End by returning to your breath for a few cycles and acknowledging one thing that went well. Then jot down your intention in one sentence. This small close-out step helps you carry improvement forward without ruminating, and it reinforces that teaching is a professional skill you can refine over time.

Reflective Journaling

Reflective journaling is a practical way for RMI instructors to improve instruction one delivery at a time. In peer-facing, technical sessions, small issues can quickly reduce clarity and application. Journaling gives you a private place to capture what happened, why it mattered, and what you want to adjust next time. Over time, your entries become a working “instructor playbook” of scenarios that land, misconceptions to anticipate, and facilitation moves that improve decision quality.

Use the five steps below to capture the most important moments from a session and turn them into one or two concrete improvements you can carry into your next instructional opportunity.


Reflective Journaling Steps

  • Keep it simple so it actually happens. Block 10 minutes after each session, or 15 minutes at the end of the day. Use the same format each time and keep your notes in one place so you can review trends before the next delivery.

  • Start with the facts of the delivery before interpreting them.

    • Who was in the room (roles, experience levels, lines of business)?

    • What were the key concepts and decision points the session targeted?

    • Which activities were used (case discussion, quiz, poll, walkthrough, group work)?

    • Where did time run long or feel rushed?

    This anchors reflection in what RMI instruction is trying to accomplish: helping learners apply skills and concepts on the job and build toward deeper analysis and judgment, rather than just covering content.

  • Write freely, but focus on evidence you can use later.

    • What questions did learners ask that revealed uncertainty or misconceptions?

    • Where did learners disagree, and what assumptions seemed to drive each view?

    • Which examples felt realistic to the group, and which did not connect and why?

    • What did learners do well when applying the content?

    If a moment stood out, record the exact prompt you used and what learners said in response. Those details are especially useful when you revise instruction.

  • Look for recurring learner behaviors that show whether concepts are transferring into practice as intended. These patterns help you pinpoint what to reinforce, what to clarify, and what to redesign.

    Common RMI patterns to watch for:

    • Learners recognize terms but struggle to apply them to a fact pattern.

    • Learners move to an answer without identifying missing information.

    • Learners confuse similar concepts, coverages, or policy provisions.

    • Learners fixate on one variable and miss other important constraints (cost, feasibility, authority, compliance).

    Summarize the pattern in one sentence and note what might correct it, such as a stronger example, a clearer definition, a revised prompt, or a short checklist learners can use during practice.

  • End each entry with three lines that translate reflection into action:

    • Keep: What worked well and should become a standard part of the course (a scenario, explanation, visual, debrief question).

    • Change: What needs revision (terminology, pacing, a confusing slide, an example that misled learners).

    • Add: One new element to improve application (a missing decision point, a short knowledge check, a debrief that surfaces assumptions).

    Finish with one concrete next step you can complete before the next delivery.

SWOT Analysis

A SWOT analysis is a simple way for RMI instructors to step back and evaluate what is helping or hurting learning outcomes. Because RMI instruction often involves technical content, professional judgment, and learners with diverse roles, a quick SWOT can help you spot what to keep, what to tighten, and what risks might undermine credibility or application. The goal is not a perfect analysis of your instruction or performance. The goal is a clearer plan for improving how well learners can apply concepts to realistic situations.

Use the six steps below to conduct a focused SWOT analysis of your instruction and translate the results into practical improvements for your next delivery.


SWOT Analysis Steps

  • Start by naming a specific part of your instruction you want to evaluate with a SWOT analysis. Choose one focus area and tie it to a learner performance need.

    Examples of SWOT focus areas in RMI instruction:

    • “Learners’ ability to apply coverage concepts to short fact patterns.”

    • “How I facilitate discussion when professionals disagree.”

    • “Recurring misconceptions in claims handling scenarios.”

    • “Realism and role-fit of scenarios for mixed-role audiences.”

    Then note the context for the SWOT: instructional format (e.g., live, virtual, blended), audience mix (e.g., risk managers, brokers, underwriters, adjusters), and which course components you are reviewing (e.g., materials, cases, assessments, facilitation).

  • List what is working and why it works, with evidence when possible (quiz results, comments, questions, observation).

    Common RMI instructional strengths include:

    • strong subject-matter credibility and clear explanations of key terms (evidence: fewer repeat definition questions; learners use terms correctly in discussion or on a short check; feedback mentions clarity/credibility)

    • realistic scenarios that mirror how information arrives in practice (evidence: learners engage quickly; questions focus on decision drivers and missing facts rather than dismissing the scenario; discussion reflects real constraints and role perspectives)

    • effective debriefs that model a defensible recommendation and the assumptions behind it (evidence: learners can restate the rationale; learners name assumptions and what would change the decision; debrief points show up in later activities or responses)

    • well-timed knowledge checks that prevent learners from falling behind (evidence: fewer confused interruptions later; stronger performance on the next application task; fewer repeated misconceptions after the check and correction)

    • facilitation techniques that surface trade-offs and constraints, like cost, feasibility, authority, and compliance (evidence: learners reference constraints when justifying choices; disagreement becomes more structured; multiple defensible answers emerge with stated assumptions)

    Write each strength as an asset you can reuse or standardize.

  • Be candid and focus on fixable issues that limit learners’ ability to apply concepts in realistic situations, because application is the foundation for the deeper work RMI requires: analyzing facts, weighing trade-offs, and justifying a defensible course of action.

    Common RMI instructional weaknesses include:

    • too much content coverage and not enough practice or debrief time (evidence: you consistently run out of time for scenarios or debriefs; learners ask “so what do we do?” near the end; end-of-session feedback requests more examples/practice)

    • examples that are accurate but too abstract to transfer to work decisions (evidence: learners struggle to answer “what would you do next?”; questions drift to “give us a real example”; learners disengage or respond with generic answers)

    • inconsistent terminology across slides, discussion, and assessments (evidence: learners ask for repeated clarification of the same term; learners use different terms for the same concept; quiz misses cluster around wording rather than the underlying concept)

    • quiz items that test recall but not application or reasoning (evidence: high quiz scores but weak performance in scenarios; learners can define terms but cannot apply them to a fact pattern; debrief reveals shallow rationales)

    • difficulty handling disagreement, which can cause learners to disengage or “wait for the answer” (evidence: the same few voices dominate; others go quiet after conflict; discussion circles without surfacing assumptions; learners ask for “the right answer” instead of stating decision drivers)

    Write each weakness as a specific, fixable gap you can address in the next delivery.

  • Opportunities are external inputs you can use to make instruction more relevant, current, and easier to apply.

    Common RMI instructional opportunities include:

    • emerging issues learners are actively facing, like new exposures, market shifts, or claims trends (how to use: refresh scenarios, examples, and debrief points with current issues learners recognize; expected impact: higher relevance and smoother transfer to real work)

    • updated internal guidance, standards, or best-practice frameworks that can be turned into job aids (how to use: convert updates into checklists, decision prompts, or reference sheets learners can use during scenarios; expected impact: improved consistency and fewer gaps between training and practice)

    • new tools that support instruction, such as polling, scenario branching, or collaborative documents (how to use: add quick checkpoints and structured participation to diagnose understanding in real time; expected impact: more visible engagement and fewer learners falling behind)

    • access to SMEs who can pressure-test scenarios for realism and current practice (how to use: have SMEs review scenarios, debrief answers, and assumptions for accuracy and realism; expected impact: stronger credibility and fewer “that’s not how it works” objections)

    • partnerships or guest perspectives that add credibility and fresh examples (how to use: bring in a short guest segment or curated examples tied to the course decision points; expected impact: richer context for mixed-role audiences and more current, role-specific takeaways)

    Choose opportunities that improve relevance without increasing complexity beyond what you can support. Write each opportunity as a specific enhancement you can pilot in the next delivery.

  • Threats are external factors that can reduce effectiveness even if your content is strong. This step often feels familiar to RMI professionals, since it mirrors the same habit of scanning for external risks and planning mitigations.

    Common RMI instructional threats include:

    • rapid change in regulations, products, or market conditions that can date examples quickly (how to respond: add a quick “currency check” before delivery and update examples, terminology, or references as needed; expected impact: fewer outdated takeaways and stronger credibility)

    • mixed-role audiences where examples fit one segment but alienate another (how to respond: build role-based variations or discussion prompts that let learners answer from their role; expected impact: higher relevance across the room and fewer “this doesn’t apply to me” reactions)

    • confidentiality constraints that limit use of real cases, making scenarios feel generic (how to respond: use realistic composites and include the details that drive decisions without using identifiable information; expected impact: stronger realism without violating confidentiality)

    • competing training providers or internal content that conflicts with your approach (how to respond: align on terminology and decision frameworks, and acknowledge differences explicitly when they matter; expected impact: reduced learner confusion and more consistent application)

    • technology failures or platform limitations that disrupt interaction and pacing (how to respond: prepare a low-tech backup for activities and checkpoints; expected impact: steadier facilitation and fewer lost learning moments)

    For each threat, note one mitigation you can apply before the next delivery.

  • Ensure the SWOT ends with decisions, not just observations. Close by selecting a few concrete actions to implement next time:

    • Keep: 1–2 strengths to standardize and repeat

    • Fix: 1–2 weaknesses to address before the next delivery

    • Try: 1 opportunity to pilot in the next session

    • Protect against: 1 threat with a clear mitigation step