Lon Setnik, MD MHPE
Published: 2023-07-26
Updated: 2023-07-26
%%
#on/decisions | #moc/publish
Why do I want to make this MOC?
MOC Process:
> - _**Cluster**_ ideas/notes together.
> - _**Add**_ lines between clusters (bridge the gaps).
> - _**Chart**_ the empty spaces between clusters (map the gaps).
> - _**Create**_ a new “thing” note.
> - _**Create**_ a new “statement” note.
> - _**Collide**_ your ideas.
> - _**Clarify**_ your note titles.
> - _**Connect**_ your notes.
> - _**Cut up**_ a note into two.
> - _**Combine**_ two notes into one.
> - _**Cast aside**_ notes that are no longer relevant.
> - _**Categorize**_ the notes in the MOC.
_Mapping Concepts_ mostly involves creating links to other ideas. It is a higher order of note making.
Creating maps of concepts involves finding [[emergence]] and [[Convergence]].
This is a way of creating knowledge and storing ideas rooted in [[Constructivism]].
It mirrors the way our brains work. It is a way of organizing and [[dealing with complexity]].
[[Knowledge Value Making]] allows us to keep each note small while creating meaning in and between each note.
%%
```toc
```
## What's the big idea here?
### Intuition happens, not necessarily well
- "Intuitive judgments can arise from genuine skill - the focus of the NDM approach - but they can also arise from the inappropriate application of the heuristic processes on which students of the HB tradition have focused." (Kahneman & Klein, 2009 p. 524)
### Decisions are not just ours
- We suspect that our decisions result only from internal processes, but the process is subject to many external influences, such as [[decision fatigue]], [[ego depletion]], and nudges and boosts.
- We think that we are active participants in the creation of our moment-to-moment decisions. In reality we are largely shaped by influencing factors such as our environment, the time we live in (which provides the opportunities we are exposed to), and the [[modern ancient mismatch]] that exists in our brains. In the absence of building our own [[Systems]], we will be overcome by those who seek to take advantage of our brains (advertisers, etc.).
### The top risk: We are fooling ourselves:
[[self assessment is not linked to performance]]; we are blind to our actual performance in most of what we do.
![[Dunning-Kruger.jpg]]
We are ignorant of the edges of our expertise: not only can we not tell when we are experts, we overestimate our abilities during much of our learning and development.
We also tend to put more into our zone of expertise than we should, and we can [[drift into failure]] from a moment of expertise to a moment of ignorance and be blind to it.
![[Am I aware of my expertise 1.jpg]]
### Environmental conditions can influence our expertise development
- High-validity environments are necessary for the development of intuitive expertise (Kahneman & Klein, 2009).
- Even within domains the validity of the environment may differ.
- In Emergency Medicine: Abdominal pain is a pretty high-validity environment. We guess what the problem is, do fairly accurate imaging, and get highly reliable feedback.
- In Emergency Medicine: dizziness may be a low-validity environment. We often work the patient up and may or may not get an answer, and we rarely learn the end-result diagnosis.
- So, are we experts in Emergency Medicine? Or not? What's our expertise? Are we self-aware enough?
## A key question:
How to be more self-aware? Is it possible?
### Challenge:
_We are NOT [[Homo economicus]], making perfect decisions at each moment. In many ways we are just [[satisficers]], looking for **A** solution, not the best solution, so that we can stop searching. This makes us vulnerable to [[Premature Closure]] and other errors._
## Two main models of deciding, and a question
_[[all models are wrong but some are useful]], and remember that [[we use metaphors to describe the brain]]. These represent [[Mental Model]]s about thinking, but if they are maps of our processes, [[The Map Is Not The Territory]]._
We experience increased anxiety in the form of the [[paradox of choice]] in the setting of abundance, which is just one of many ways we are let down by our biased brains. We would think that more choice is better, but it is actually harder to make decisions when we have more options.
### Heuristics and Biases
_Since our brains developed in a time of scarcity, we are often in a [[Scarcity Mindset]]._ As a result, we try to save energy when making decisions.
- We develop [[heuristics]] (mental shortcuts, Type I thinking).
- These shortcuts lead to reproducible decision errors known as [[cognitive biases]].
- An example is [[availability bias]]: if I saw a PE recently, I am more likely to search for a PE today in my low-risk patient.
- Another example is [[confirmation bias]]: we actively seek information that supports our thin-sliced decision.
- [[Probabilistic Thinking]] is another, more deliberate decision-making process, often called Type II thinking (a toy worked example follows this list).
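To make Type II thinking concrete, here is a minimal sketch of probabilistic updating using the odds form of Bayes' theorem (posttest odds = pretest odds × likelihood ratio), framed around the PE (pulmonary embolism) example above. The pretest probabilities and the likelihood ratio are illustrative assumptions, not clinical guidance.

```python
# Minimal sketch of Type II / probabilistic updating using the odds form of Bayes' theorem.
# All numbers below are illustrative assumptions, not clinical guidance.

def post_test_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert a pretest probability to a posttest probability via odds."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Hypothetical low-risk PE workup: ~2% pretest probability and a negative test
# with an assumed negative likelihood ratio of 0.1.
print(post_test_probability(0.02, 0.1))  # ~0.002, i.e. about 0.2%

# The same negative test matters less if availability bias has already inflated
# the (felt) pretest probability to 30%.
print(post_test_probability(0.30, 0.1))  # ~0.041, i.e. about 4%
```

The point of the sketch is the contrast with Type I: the intuition sets the pretest probability, and that starting point changes how much any single piece of evidence should move us.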
### Naturalistic Decision Making
- [[Naturalistic decision-making]] is the idea that experts, with exposure to systems that provide effective feedback, can make a choice, perform mental simulations around that choice, and, as [[satisficers]], choose a solution that fits the moment (see the sketch after this list).
- NDM is impacted by [[Bounded Rationality]]: what you can imagine in the moment is bounded by your previous experiences, the cues you notice, your available energy, the environment, and so on. NDM doesn't operate in an open world, so it can fall victim to considering too narrow a set of possible solutions, or even too narrow a framing of the problem.
- For example, if you only have a moment to make a decision (pull up or bank left or right, missile incoming), you have less ability to pull in the necessary information, so you may not realize that you actually have enough ceiling to go straight down, which could have been the best choice if you had had more time.
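A toy sketch of that recognition-primed, satisficing loop: options are generated in the order experience suggests them, each is mentally simulated, and the first one that looks workable is accepted rather than ranking every possibility. The function names, scores, and the "good enough" threshold here are hypothetical, purely for illustration.

```python
# Toy sketch of a recognition-primed, satisficing decision loop (in the spirit of Klein's NDM).
# The candidate options, scores, and GOOD_ENOUGH threshold are hypothetical illustrations.
from typing import Callable, Iterable, Optional

GOOD_ENOUGH = 0.7  # satisficing threshold: accept the first workable option

def satisfice(candidates: Iterable[str],
              simulate: Callable[[str], float]) -> Optional[str]:
    """Return the first option whose mental simulation looks workable.

    Candidates arrive in the order experience suggests them (most typical first),
    so the search stops early instead of ranking every possibility.
    """
    for option in candidates:
        if simulate(option) >= GOOD_ENOUGH:
            return option  # stop searching: a solution that fits the moment
    return None  # bounded rationality: nothing that was imagined looked workable

# The option set itself is bounded by experience and time pressure;
# "dive straight down" may simply never be generated as a candidate.
scores = {"pull up": 0.4, "bank left": 0.75, "bank right": 0.6}
print(satisfice(["pull up", "bank left", "bank right"], lambda o: scores[o]))  # -> bank left
```

Note what the loop never does: it never enumerates or compares all options, which is exactly why a bounded option set can quietly exclude the best choice.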
## Can we tell? How can we tell if we can tell?
### What is the environment we have learned in?
- The key question is: have we learned effectively? Is our environment full of opportunities for real feedback? Or have we been fooled into thinking we are learning effectively because our mind finds patterns that don't exist? We might be using HB when we should be using NDM.
## Some opportunities to do better!/?
### [[Design thinking]]
[[Thinking in Systems]] can help us avoid the pitfalls of different decision-making schemas and set up [[Mental Model]]s that create opportunities for [[metacognition]].
Systems to help us avoid decision traps include:
- [[a simple no]] -> Avoids over-promising and over-planning, to keep you in your most effective zone of production. [[Forage carefully]] to avoid picking mushrooms that might be good, and to ensure you only pick mushrooms (say yes to things) that are _amazing_. [[Hell Yes or No]]!
- [[Design my life to live]] -> Since you are in [[default mode]] frequently, this allows your default mode to get you the outcomes you want.
- Design for [[Antifragility]]: try to create situations where the worst thing that happens is growth. This happens when you start with the goal of [[climb till you fall]] so you can [[turn wounds into wisdom]].
- Look for opportunities that have maximum payoff, since [[Barbells make you stronger]].
- [[Identifying Reality]] -> Work to identify reality so that you can [[Worry selectively]] about the right things.
- Use [[Eisenhower's Box]] to help choose what is really important (a small sketch follows the figure below). To decide what's important, go back to [[First Principles Thinking]], like focusing on [[trust is the currency of relationships]].
- [[Worry selectively]] about real problems that can be influenced; avoid worrying about [[gravity or anchor problems]] (is this flight going to crash - a Gravity Problem, not a challenge I can influence, so not a problem).
- If it's in the "Delete" quadrant, [[remove to gain]].
![[eisenhower-box.jpg]]
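A small, hedged sketch of how the box routes a decision: two yes/no questions (important? urgent?) map to the four usual actions. The do / schedule / delegate / delete labels follow the common reading of the box; the function and labels are my shorthand, not a canonical formulation.

```python
# Minimal sketch of Eisenhower's Box: two yes/no questions route a task to one of four actions.

def eisenhower(important: bool, urgent: bool) -> str:
    if important and urgent:
        return "Do it now"
    if important and not urgent:
        return "Schedule it"
    if urgent:  # urgent but not important
        return "Delegate it"
    return "Delete it"  # the quadrant where 'remove to gain' applies

print(eisenhower(important=True, urgent=False))   # -> "Schedule it"
print(eisenhower(important=False, urgent=False))  # -> "Delete it"
```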
### Vision:
_Design situations, like groups, [[The Extended Mind]], [[Building a Second Brain]], and [[Thinking as a group vs thinking as an individual]], that maintain [[epistemic humility]] and [[Situational Awareness]] and nudge me closer to identifying the real problem and deciding on roles, actions, and solutions._
- Live in [[Flow]], where we have set up systems that work with our brains as they are, and those systems keep us producing the output we want.
- For me, [[⛏ Daily Work on the 🐓 Idea Farm]] is how I plan to get to do what I want to do, by working with [[metacognition]] and developing a general [[reflective judgment]] to identify whether I'm thinking well.
### A purposeful pause
- Take a pause at the right time and ask these questions of my intuition:
	- **Is it safe to use this way of making decisions?**
	- **Am I at risk here for [[black swan events]]? Have I taken the steps towards resilience in my decision making?**
- When can I extrapolate my decision-making process to other situations? Have I developed the flexibility of [[Adaptive Expertise]], or am I making an inappropriate parallel to this new situation?
- A [[premortem]] can attempt to identify the risks up front, and create strategies to overcome them.
- Also try [[Inversion]]: ask what the opposite would be, bust out [[Socratic questioning]], and play devil's advocate for myself.
### A debriefing assessment
- [[reflective judgment]] is a helpful framework for assessing our learners: what is their level of [[epistemic humility]]? Use [[curiosity]] to approach this moment.
## Sources:
Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. _American Psychologist_, _64_(6), 515–526. [https://doi.org/10.1037/a0016755](https://doi.org/10.1037/a0016755)
Klein, G. (2008). Naturalistic decision making. _Human Factors_. [http://iac.dtic.mil/hsiac](http://iac.dtic.mil/hsiac)
Todd, P. M., & Gigerenzer, G. (2001). Putting naturalistic decision making into the adaptive toolbox. _Journal of Behavioral Decision Making_, _14_(5), 381–383. [https://doi.org/10.1002/bdm.396](https://doi.org/10.1002/bdm.396)
Kahneman, D. (2013). _Thinking, fast and slow_ (1st pbk. ed). Farrar, Straus and Giroux.
Clarke, S. O., Ilgen, J. S., & Regehr, G. (2023). Fostering Adaptive Expertise Through Simulation. _Academic Medicine_, _Publish Ahead of Print_. [https://doi.org/10.1097/ACM.0000000000005257](https://doi.org/10.1097/ACM.0000000000005257)
Raufaste, E., Eyrolle, H., & Marine, C. (1998). Pertinence Generation in Radiological Diagnosis: Spreading Activation and the Nature of Expertise. _Cognitive Science_, _22_(4), 517–546.
Pusic, M. V., Cook, D. A., Friedman, J. L., Lorin, J. D., Rosenzweig, B. P., Tong, C. K. W., Smith, S., Lineberry, M., & Hatala, R. (2023). Modeling Diagnostic Expertise in Cases of Irreducible Uncertainty: The Decision-Aligned Response Model. _Academic Medicine_, _98_(1), 88–97. [https://doi.org/10.1097/ACM.0000000000004918](https://doi.org/10.1097/ACM.0000000000004918)
%%
## Unrequited notes (by link)
_Update when creating MOC to point to this note_
These notes point directly to this note. But this note doesn't point back.
```dataview
table file.mtime.year + "-" + file.mtime.month + "-" + file.mtime.day as Modified
from [[Decision Making]]
and !outgoing([[Decision Making]])
sort file.mtime desc
```
## Associated notes (by tag)
_Update when creating MOC to point to this tag_
These notes have this associated tag: `#on/decisions`.
```dataview
table file.mtime.year + "-" + file.mtime.month + "-" + file.mtime.day as Modified
from #on/decisions
and !outgoing([[Decision Making]])
sort file.mtime desc
```