rustc_mir_build/builder/scope.rs

1/*!
2Managing the scope stack. The scopes are tied to lexical scopes, so as
3we descend the THIR, we push a scope on the stack, build its
4contents, and then pop it off. Every scope is named by a
5`region::Scope`.
6
7### SEME Regions
8
9When pushing a new [Scope], we record the current point in the graph (a
10basic block); this marks the entry to the scope. We then generate more
11stuff in the control-flow graph. Whenever the scope is exited, either
12via a `break` or `return` or just by fallthrough, that marks an exit
13from the scope. Each lexical scope thus corresponds to a single-entry,
14multiple-exit (SEME) region in the control-flow graph.
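
For example (an illustrative sketch, not taken from the compiler itself), the
block below is a single-entry region with two exits:

```
# let cond = true;
'a: {                      // entering the block starts the SEME region
    if cond { break 'a; }  // one exit from the region
    // falling through the end of the block is another exit
}
```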
15
16For now, we record the `region::Scope` for each SEME region for later reference
17(see caveat in next paragraph). This is because destruction scopes are tied to
18them. This may change in the future so that MIR lowering determines its own
19destruction scopes.
20
21### Not so SEME Regions
22
23In the course of building matches, it sometimes happens that certain code
24(namely guards) gets executed multiple times. This means that the lexical
25scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
26mapping is from one scope to a vector of SEME regions. Since the SEME regions
27are disjoint, the mapping is still one-to-one for the set of SEME regions that
28we're currently in.
29
30Also in matches, the scopes assigned to arms are sometimes not even SEME regions!
31Each arm has a single region with one entry for each pattern. We manually
32manipulate the scheduled drops in this scope to avoid dropping things multiple
33times.
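
For example (an illustrative sketch; `val` and `cond` are arbitrary):

```
# let val = Some(1);
# let cond = true;
match val {
    // The guard may be lowered once per `|` alternative, so its lexical
    // scope can correspond to two disjoint SEME regions.
    Some(0) | Some(1) if cond => {}
    _ => {}
}
```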
34
35### Drops
36
37The primary purpose for scopes is to insert drops: while building
38the contents, we also accumulate places that need to be dropped upon
39exit from each scope. This is done by calling `schedule_drop`. Once a
40drop is scheduled, whenever we branch out we will insert drops of all
41those places onto the outgoing edge. Note that we don't know the full
42set of scheduled drops up front, and so whenever we exit from the
43scope we only drop the values scheduled thus far. For example, consider
44the scope S corresponding to this loop:
45
46```
47# let cond = true;
48loop {
49    let x = ..;
50    if cond { break; }
51    let y = ..;
52}
53```
54
55When processing the `let x`, we will add one drop to the scope for
56`x`. The break will then insert a drop for `x`. When we process `let
57y`, we will add another drop (in fact, to a subscope, but let's ignore
58that for now); any later drops would also drop `y`.
59
60### Early exit
61
62There are numerous "normal" ways to early exit a scope: `break`,
63`continue`, `return` (panics are handled separately). Whenever an
64early exit occurs, the method `break_scope` is called. It is given the
65current point in execution where the early exit occurs, as well as the
66scope you want to branch to (note that all early exits branch to some
67other enclosing scope). `break_scope` will record the set of drops currently
68scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
69will be added to the CFG.
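
As a concrete (illustrative) sketch, with `should_stop` standing in for an
arbitrary condition:

```
# fn should_stop() -> bool { true }
'outer: {
    let s = String::from("temporary");
    if should_stop() {
        // This early exit goes through `break_scope`, which records the
        // currently scheduled drop of `s`; the drop itself is emitted on the
        // outgoing edge when the drop tree is lowered.
        break 'outer;
    }
    drop(s);
}
```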
70
71Panics are handled in a similar fashion, except that the drops are added to the
72MIR once the rest of the function has finished being lowered. If a terminator
73can panic, call `diverge_from(block)` with the block containing the terminator
74`block`.
75
76### Breakable scopes
77
78In addition to the normal scope stack, we track a loop scope stack
79that contains only loops and breakable blocks. It tracks where a `break`,
80`continue` or `return` should go to.
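
For example (illustrative only):

```
# let mut n = 0;
'outer: loop {          // pushes a breakable scope
    loop {              // pushes another breakable scope on top of it
        n += 1;
        if n > 3 {
            break 'outer; // resolved against the outer entry of that stack
        }
        break;            // resolved against the innermost entry
    }
}
```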
81
82*/
83
84use std::mem;
85
86use rustc_middle::mir::interpret::ErrorHandled;
87use rustc_data_structures::fx::FxHashMap;
88use rustc_hir::HirId;
89use rustc_index::{IndexSlice, IndexVec};
90use rustc_middle::middle::region;
91use rustc_middle::mir::{self, *};
92use rustc_middle::thir::{AdtExpr, AdtExprBase, ArmId, ExprId, ExprKind, LintLevel};
93use rustc_middle::ty::{self, Ty, TyCtxt, TypeVisitableExt, ValTree};
94use rustc_middle::{bug, span_bug};
95use rustc_pattern_analysis::rustc::RustcPatCtxt;
96use rustc_session::lint::Level;
97use rustc_span::source_map::Spanned;
98use rustc_span::{DUMMY_SP, Span};
99use tracing::{debug, instrument};
100
101use super::matches::BuiltMatchTree;
102use crate::builder::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
103use crate::errors::{ConstContinueBadConst, ConstContinueUnknownJumpTarget};
104
105#[derive(Debug)]
106pub(crate) struct Scopes<'tcx> {
107    scopes: Vec<Scope>,
108
109    /// The current set of breakable scopes. See module comment for more details.
110    breakable_scopes: Vec<BreakableScope<'tcx>>,
111
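    /// The current set of const-continuable (`#[loop_match]`) scopes, which
    /// `#[const_continue]` breaks target. See [ConstContinuableScope].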
112    const_continuable_scopes: Vec<ConstContinuableScope<'tcx>>,
113
114    /// The scope of the innermost if-then currently being lowered.
115    if_then_scope: Option<IfThenScope>,
116
117    /// Drops that need to be done on unwind paths. See the comment on
118    /// [DropTree] for more details.
119    unwind_drops: DropTree,
120
121    /// Drops that need to be done on paths to the `CoroutineDrop` terminator.
122    coroutine_drops: DropTree,
123}
124
125#[derive(Debug)]
126struct Scope {
127    /// The source scope this scope was created in.
128    source_scope: SourceScope,
129
130    /// The region span of this scope within source code.
131    region_scope: region::Scope,
132
133    /// Set of places to drop when exiting this scope. This starts
134    /// out empty but grows as variables are declared during the
135    /// building process. This is a stack, so we always drop from the
136    /// end of the vector (top of the stack) first.
137    drops: Vec<DropData>,
138
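    /// Locals that have been moved out of this scope (for example into the
    /// return place) and therefore no longer need to be dropped on exit.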
139    moved_locals: Vec<Local>,
140
141    /// The drop index that will drop everything in and below this scope on an
142    /// unwind path.
143    cached_unwind_block: Option<DropIdx>,
144
145    /// The drop index that will drop everything in and below this scope on a
146    /// coroutine drop path.
147    cached_coroutine_drop_block: Option<DropIdx>,
148}
149
150#[derive(Clone, Copy, Debug)]
151struct DropData {
152    /// The `Span` where drop obligation was incurred (typically where place was
153    /// declared)
154    source_info: SourceInfo,
155
156    /// The local to drop.
157    local: Local,
158
159    /// Whether this is a value Drop or a StorageDead.
160    kind: DropKind,
161}
162
163#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
164pub(crate) enum DropKind {
165    Value,
166    Storage,
167    ForLint,
168}
169
170#[derive(Debug)]
171struct BreakableScope<'tcx> {
172    /// Region scope of the loop
173    region_scope: region::Scope,
174    /// The destination of the loop/block expression itself (i.e., where to put
175    /// the result of a `break` or `return` expression)
176    break_destination: Place<'tcx>,
177    /// Drops that happen on the `break`/`return` path.
178    break_drops: DropTree,
179    /// Drops that happen on the `continue` path.
180    continue_drops: Option<DropTree>,
181}
182
183#[derive(Debug)]
184struct ConstContinuableScope<'tcx> {
185    /// The scope of the `#[loop_match]` that its `#[const_continue]`s will jump to.
186    region_scope: region::Scope,
187    /// The place of the state of a `#[loop_match]`, which a `#[const_continue]` must update.
188    state_place: Place<'tcx>,
189
190    arms: Box<[ArmId]>,
191    built_match_tree: BuiltMatchTree<'tcx>,
192
193    /// Drops that happen on a `#[const_continue]`
194    const_continue_drops: DropTree,
195}
196
197#[derive(Debug)]
198struct IfThenScope {
199    /// The if-then scope or arm scope
200    region_scope: region::Scope,
201    /// Drops that happen on the `else` path.
202    else_drops: DropTree,
203}
204
205/// The target of an expression that breaks out of a scope
206#[derive(Clone, Copy, Debug)]
207pub(crate) enum BreakableTarget {
208    Continue(region::Scope),
209    Break(region::Scope),
210    Return,
211}
212
213rustc_index::newtype_index! {
214    #[orderable]
215    struct DropIdx {}
216}
217
218const ROOT_NODE: DropIdx = DropIdx::ZERO;
219
220/// A tree of drops that we have deferred lowering. It's used for:
221///
222/// * Drops on unwind paths
223/// * Drops on coroutine drop paths (when a suspended coroutine is dropped)
224/// * Drops on return and loop exit paths
225/// * Drops on the else path in an `if let` chain
226///
227/// Once no more nodes can be added to the tree, we lower it to MIR in one go
228/// in `build_mir`.
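///
/// A rough sketch of how the builder drives this type (illustrative pseudocode;
/// `data_a`, `data_b`, `exit_block` and `cont_block` are placeholders):
///
/// ```ignore (illustrative)
/// let mut drops = DropTree::new();
/// // Chain two drops: `data_b` is performed first, followed by `data_a`,
/// // whose `next` link leads back to the root ("nothing left to drop").
/// let a = drops.add_drop(data_a, ROOT_NODE);
/// let b = drops.add_drop(data_b, a);
/// // An early-exit block enters the tree at `b`.
/// drops.add_entry_point(exit_block, b);
/// // Once complete, the whole tree is lowered into the CFG in one go.
/// let blocks = drops.build_mir::<ExitScopes>(&mut cfg, Some(cont_block));
/// ```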
229#[derive(Debug)]
230struct DropTree {
231    /// Nodes in the drop tree, containing drop data and a link to the next node.
232    drop_nodes: IndexVec<DropIdx, DropNode>,
233    /// Map for finding the index of an existing node, given its contents.
234    existing_drops_map: FxHashMap<DropNodeKey, DropIdx>,
235    /// Edges into the `DropTree` that need to be added once it's lowered.
236    entry_points: Vec<(DropIdx, BasicBlock)>,
237}
238
239/// A single node in the drop tree.
240#[derive(Debug)]
241struct DropNode {
242    /// Info about the drop to be performed at this node in the drop tree.
243    data: DropData,
244    /// Index of the "next" drop to perform (in drop order, not declaration order).
245    next: DropIdx,
246}
247
248/// Subset of [`DropNode`] used for reverse lookup in a hash table.
249#[derive(Debug, PartialEq, Eq, Hash)]
250struct DropNodeKey {
251    next: DropIdx,
252    local: Local,
253}
254
255impl Scope {
256    /// Whether there's anything to do for the cleanup path, that is,
257    /// when unwinding through this scope. This includes destructors,
258    /// but not StorageDead statements, which don't get emitted at all
259    /// for unwinding, for several reasons:
260    ///  * clang doesn't emit llvm.lifetime.end for C++ unwinding
261    ///  * LLVM's memory dependency analysis can't handle it atm
262    ///  * polluting the cleanup MIR with StorageDead creates
263    ///    landing pads even though there are no actual destructors
264    ///  * freeing up stack space has no effect during unwinding
265    /// Note that for coroutines we do emit StorageDeads, for use by
266    /// optimizations in the MIR coroutine transform.
267    fn needs_cleanup(&self) -> bool {
268        self.drops.iter().any(|drop| match drop.kind {
269            DropKind::Value | DropKind::ForLint => true,
270            DropKind::Storage => false,
271        })
272    }
273
274    fn invalidate_cache(&mut self) {
275        self.cached_unwind_block = None;
276        self.cached_coroutine_drop_block = None;
277    }
278}
279
280/// A trait that determines how [DropTree] creates its blocks and
281/// links to any entry nodes.
282trait DropTreeBuilder<'tcx> {
283    /// Create a new block for the tree. This should call either
284    /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
285    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;
286
287    /// Links a block outside the drop tree, `from`, to the block `to` inside
288    /// the drop tree.
289    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
290}
291
292impl DropTree {
293    fn new() -> Self {
294        // The root node of the tree doesn't represent a drop, but instead
295        // represents the block in the tree that should be jumped to once all
296        // of the required drops have been performed.
297        let fake_source_info = SourceInfo::outermost(DUMMY_SP);
298        let fake_data =
299            DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
300        let drop_nodes = IndexVec::from_raw(vec![DropNode { data: fake_data, next: DropIdx::MAX }]);
301        Self { drop_nodes, entry_points: Vec::new(), existing_drops_map: FxHashMap::default() }
302    }
303
304    /// Adds a node to the drop tree, consisting of drop data and the index of
305    /// the "next" drop (in drop order), which could be the sentinel [`ROOT_NODE`].
306    ///
307    /// If there is already an equivalent node in the tree, nothing is added, and
308    /// that node's index is returned. Otherwise, the new node's index is returned.
309    fn add_drop(&mut self, data: DropData, next: DropIdx) -> DropIdx {
310        let drop_nodes = &mut self.drop_nodes;
311        *self
312            .existing_drops_map
313            .entry(DropNodeKey { next, local: data.local })
314            // Create a new node, and also add its index to the map.
315            .or_insert_with(|| drop_nodes.push(DropNode { data, next }))
316    }
317
318    /// Registers `from` as an entry point to this drop tree, at `to`.
319    ///
320    /// During [`Self::build_mir`], `from` will be linked to the corresponding
321    /// block within the drop tree.
322    fn add_entry_point(&mut self, from: BasicBlock, to: DropIdx) {
323        debug_assert!(to < self.drop_nodes.next_index());
324        self.entry_points.push((to, from));
325    }
326
327    /// Builds the MIR for a given drop tree.
328    fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
329        &mut self,
330        cfg: &mut CFG<'tcx>,
331        root_node: Option<BasicBlock>,
332    ) -> IndexVec<DropIdx, Option<BasicBlock>> {
333        debug!("DropTree::build_mir(drops = {:#?})", self);
334
335        let mut blocks = self.assign_blocks::<T>(cfg, root_node);
336        self.link_blocks(cfg, &mut blocks);
337
338        blocks
339    }
340
341    /// Assign blocks for all of the drops in the drop tree that need them.
342    fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
343        &mut self,
344        cfg: &mut CFG<'tcx>,
345        root_node: Option<BasicBlock>,
346    ) -> IndexVec<DropIdx, Option<BasicBlock>> {
347        // StorageDead statements can share blocks with each other and also with
348        // a Drop terminator. We iterate through the drops to find which drops
349        // need their own block.
350        #[derive(Clone, Copy)]
351        enum Block {
352            // This drop is unreachable
353            None,
354            // This drop is only reachable through the `StorageDead` with the
355            // specified index.
356            Shares(DropIdx),
357            // This drop has more than one way of being reached, or it is
358            // branched to from outside the tree, or its predecessor is a
359            // `Value` drop.
360            Own,
361        }
362
363        let mut blocks = IndexVec::from_elem(None, &self.drop_nodes);
364        blocks[ROOT_NODE] = root_node;
365
366        let mut needs_block = IndexVec::from_elem(Block::None, &self.drop_nodes);
367        if root_node.is_some() {
368            // In some cases (such as drops for `continue`) the root node
369            // already has a block. In this case, make sure that we don't
370            // override it.
371            needs_block[ROOT_NODE] = Block::Own;
372        }
373
374        // Sort so that we only need to check the last value.
375        let entry_points = &mut self.entry_points;
376        entry_points.sort();
377
378        for (drop_idx, drop_node) in self.drop_nodes.iter_enumerated().rev() {
379            if entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
380                let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
381                needs_block[drop_idx] = Block::Own;
382                while entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
383                    let entry_block = entry_points.pop().unwrap().1;
384                    T::link_entry_point(cfg, entry_block, block);
385                }
386            }
387            match needs_block[drop_idx] {
388                Block::None => continue,
389                Block::Own => {
390                    blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
391                }
392                Block::Shares(pred) => {
393                    blocks[drop_idx] = blocks[pred];
394                }
395            }
396            if let DropKind::Value = drop_node.data.kind {
397                needs_block[drop_node.next] = Block::Own;
398            } else if drop_idx != ROOT_NODE {
399                match &mut needs_block[drop_node.next] {
400                    pred @ Block::None => *pred = Block::Shares(drop_idx),
401                    pred @ Block::Shares(_) => *pred = Block::Own,
402                    Block::Own => (),
403                }
404            }
405        }
406
407        debug!("assign_blocks: blocks = {:#?}", blocks);
408        assert!(entry_points.is_empty());
409
410        blocks
411    }
412
413    fn link_blocks<'tcx>(
414        &self,
415        cfg: &mut CFG<'tcx>,
416        blocks: &IndexSlice<DropIdx, Option<BasicBlock>>,
417    ) {
418        for (drop_idx, drop_node) in self.drop_nodes.iter_enumerated().rev() {
419            let Some(block) = blocks[drop_idx] else { continue };
420            match drop_node.data.kind {
421                DropKind::Value => {
422                    let terminator = TerminatorKind::Drop {
423                        target: blocks[drop_node.next].unwrap(),
424                        // The caller will handle this if needed.
425                        unwind: UnwindAction::Terminate(UnwindTerminateReason::InCleanup),
426                        place: drop_node.data.local.into(),
427                        replace: false,
428                        drop: None,
429                        async_fut: None,
430                    };
431                    cfg.terminate(block, drop_node.data.source_info, terminator);
432                }
433                DropKind::ForLint => {
434                    let stmt = Statement::new(
435                        drop_node.data.source_info,
436                        StatementKind::BackwardIncompatibleDropHint {
437                            place: Box::new(drop_node.data.local.into()),
438                            reason: BackwardIncompatibleDropReason::Edition2024,
439                        },
440                    );
441                    cfg.push(block, stmt);
442                    let target = blocks[drop_node.next].unwrap();
443                    if target != block {
444                        // Diagnostics don't use this `Span` but debuginfo
445                        // might. Since we don't want breakpoints to be placed
446                        // here, especially when this is on an unwind path, we
447                        // use `DUMMY_SP`.
448                        let source_info =
449                            SourceInfo { span: DUMMY_SP, ..drop_node.data.source_info };
450                        let terminator = TerminatorKind::Goto { target };
451                        cfg.terminate(block, source_info, terminator);
452                    }
453                }
454                // Root nodes don't correspond to a drop.
455                DropKind::Storage if drop_idx == ROOT_NODE => {}
456                DropKind::Storage => {
457                    let stmt = Statement::new(
458                        drop_node.data.source_info,
459                        StatementKind::StorageDead(drop_node.data.local),
460                    );
461                    cfg.push(block, stmt);
462                    let target = blocks[drop_node.next].unwrap();
463                    if target != block {
464                        // Diagnostics don't use this `Span` but debuginfo
465                        // might. Since we don't want breakpoints to be placed
466                        // here, especially when this is on an unwind path, we
467                        // use `DUMMY_SP`.
468                        let source_info =
469                            SourceInfo { span: DUMMY_SP, ..drop_node.data.source_info };
470                        let terminator = TerminatorKind::Goto { target };
471                        cfg.terminate(block, source_info, terminator);
472                    }
473                }
474            }
475        }
476    }
477}
478
479impl<'tcx> Scopes<'tcx> {
480    pub(crate) fn new() -> Self {
481        Self {
482            scopes: Vec::new(),
483            breakable_scopes: Vec::new(),
484            const_continuable_scopes: Vec::new(),
485            if_then_scope: None,
486            unwind_drops: DropTree::new(),
487            coroutine_drops: DropTree::new(),
488        }
489    }
490
491    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
492        debug!("push_scope({:?})", region_scope);
493        self.scopes.push(Scope {
494            source_scope: vis_scope,
495            region_scope: region_scope.0,
496            drops: vec![],
497            moved_locals: vec![],
498            cached_unwind_block: None,
499            cached_coroutine_drop_block: None,
500        });
501    }
502
503    fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
504        let scope = self.scopes.pop().unwrap();
505        assert_eq!(scope.region_scope, region_scope.0);
506        scope
507    }
508
509    fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
510        self.scopes
511            .iter()
512            .rposition(|scope| scope.region_scope == region_scope)
513            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
514    }
515
516    /// Returns the topmost active scope, which is known to be alive until
517    /// the next scope expression.
518    fn topmost(&self) -> region::Scope {
519        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
520    }
521}
522
523impl<'a, 'tcx> Builder<'a, 'tcx> {
524    // Adding and removing scopes
525    // ==========================
526
527    ///  Start a breakable scope, which tracks where `continue`, `break` and
528    ///  `return` should branch to.
529    pub(crate) fn in_breakable_scope<F>(
530        &mut self,
531        loop_block: Option<BasicBlock>,
532        break_destination: Place<'tcx>,
533        span: Span,
534        f: F,
535    ) -> BlockAnd<()>
536    where
537        F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
538    {
539        let region_scope = self.scopes.topmost();
540        let scope = BreakableScope {
541            region_scope,
542            break_destination,
543            break_drops: DropTree::new(),
544            continue_drops: loop_block.map(|_| DropTree::new()),
545        };
546        self.scopes.breakable_scopes.push(scope);
547        let normal_exit_block = f(self);
548        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
549        assert!(breakable_scope.region_scope == region_scope);
550        let break_block =
551            self.build_exit_tree(breakable_scope.break_drops, region_scope, span, None);
552        if let Some(drops) = breakable_scope.continue_drops {
553            self.build_exit_tree(drops, region_scope, span, loop_block);
554        }
555        match (normal_exit_block, break_block) {
556            (Some(block), None) | (None, Some(block)) => block,
557            (None, None) => self.cfg.start_new_block().unit(),
558            (Some(normal_block), Some(exit_block)) => {
559                let target = self.cfg.start_new_block();
560                let source_info = self.source_info(span);
561                self.cfg.terminate(
562                    normal_block.into_block(),
563                    source_info,
564                    TerminatorKind::Goto { target },
565                );
566                self.cfg.terminate(
567                    exit_block.into_block(),
568                    source_info,
569                    TerminatorKind::Goto { target },
570                );
571                target.unit()
572            }
573        }
574    }
575
576    /// Start a const-continuable scope, which tracks where `#[const_continue] break` should
577    /// branch to.
578    pub(crate) fn in_const_continuable_scope<F>(
579        &mut self,
580        arms: Box<[ArmId]>,
581        built_match_tree: BuiltMatchTree<'tcx>,
582        state_place: Place<'tcx>,
583        span: Span,
584        f: F,
585    ) -> BlockAnd<()>
586    where
587        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
588    {
589        let region_scope = self.scopes.topmost();
590        let scope = ConstContinuableScope {
591            region_scope,
592            state_place,
593            const_continue_drops: DropTree::new(),
594            arms,
595            built_match_tree,
596        };
597        self.scopes.const_continuable_scopes.push(scope);
598        let normal_exit_block = f(self);
599        let const_continue_scope = self.scopes.const_continuable_scopes.pop().unwrap();
600        assert!(const_continue_scope.region_scope == region_scope);
601
602        let break_block = self.build_exit_tree(
603            const_continue_scope.const_continue_drops,
604            region_scope,
605            span,
606            None,
607        );
608
609        match (normal_exit_block, break_block) {
610            (block, None) => block,
611            (normal_block, Some(exit_block)) => {
612                let target = self.cfg.start_new_block();
613                let source_info = self.source_info(span);
614                self.cfg.terminate(
615                    normal_block.into_block(),
616                    source_info,
617                    TerminatorKind::Goto { target },
618                );
619                self.cfg.terminate(
620                    exit_block.into_block(),
621                    source_info,
622                    TerminatorKind::Goto { target },
623                );
624                target.unit()
625            }
626        }
627    }
628
629    /// Start an if-then scope which tracks drops for `if` expressions and `if`
630    /// guards.
631    ///
632    /// For an if-let chain:
633    ///
634    /// if let Some(x) = a && let Some(y) = b && let Some(z) = c { ... }
635    ///
636    /// There are three possible ways the condition can be false and we may have
637    /// to drop `x`, `x` and `y`, or neither depending on which binding fails.
638    /// To handle this correctly we use a `DropTree` in a similar way to a
639    /// `loop` expression and 'break' out on all of the 'else' paths.
640    ///
641    /// Notes:
642    /// - We don't need to keep a stack of scopes in the `Builder` because the
643    ///   'else' paths will only leave the innermost scope.
644    /// - This is also used for match guards.
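    ///
    /// A hypothetical call-site sketch (the argument names and the elided
    /// parameters are placeholders, not the exact lowering code):
    ///
    /// ```ignore (illustrative)
    /// let (then_blk, else_blk) = this.in_if_then_scope(cond_scope, expr_span, |this| {
    ///     // Lower the condition here; values bound while testing it are
    ///     // dropped on the failure paths via the scope's `else_drops` tree.
    ///     this.then_else_break(/* ... */)
    /// });
    /// ```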
645    pub(crate) fn in_if_then_scope<F>(
646        &mut self,
647        region_scope: region::Scope,
648        span: Span,
649        f: F,
650    ) -> (BasicBlock, BasicBlock)
651    where
652        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
653    {
654        let scope = IfThenScope { region_scope, else_drops: DropTree::new() };
655        let previous_scope = mem::replace(&mut self.scopes.if_then_scope, Some(scope));
656
657        let then_block = f(self).into_block();
658
659        let if_then_scope = mem::replace(&mut self.scopes.if_then_scope, previous_scope).unwrap();
660        assert!(if_then_scope.region_scope == region_scope);
661
662        let else_block =
663            self.build_exit_tree(if_then_scope.else_drops, region_scope, span, None).map_or_else(
664                || self.cfg.start_new_block(),
665                |else_block_and| else_block_and.into_block(),
666            );
667
668        (then_block, else_block)
669    }
670
671    /// Convenience wrapper that pushes a scope and then executes `f`
672    /// to build its contents, popping the scope afterwards.
673    #[instrument(skip(self, f), level = "debug")]
674    pub(crate) fn in_scope<F, R>(
675        &mut self,
676        region_scope: (region::Scope, SourceInfo),
677        lint_level: LintLevel,
678        f: F,
679    ) -> BlockAnd<R>
680    where
681        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
682    {
683        let source_scope = self.source_scope;
684        if let LintLevel::Explicit(current_hir_id) = lint_level {
685            let parent_id =
686                self.source_scopes[source_scope].local_data.as_ref().unwrap_crate_local().lint_root;
687            self.maybe_new_source_scope(region_scope.1.span, current_hir_id, parent_id);
688        }
689        self.push_scope(region_scope);
690        let mut block;
691        let rv = unpack!(block = f(self));
692        block = self.pop_scope(region_scope, block).into_block();
693        self.source_scope = source_scope;
694        debug!(?block);
695        block.and(rv)
696    }
697
698    /// Push a scope onto the stack. You can then build code in this
699    /// scope and call `pop_scope` afterwards. Note that these two
700    /// calls must be paired; using `in_scope` as a convenience
701    /// wrapper may be preferable.
702    pub(crate) fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
703        self.scopes.push_scope(region_scope, self.source_scope);
704    }
705
706    /// Pops a scope, which should have region scope `region_scope`,
707    /// adding any drops onto the end of `block` that are needed.
708    /// This must match 1-to-1 with `push_scope`.
709    pub(crate) fn pop_scope(
710        &mut self,
711        region_scope: (region::Scope, SourceInfo),
712        mut block: BasicBlock,
713    ) -> BlockAnd<()> {
714        debug!("pop_scope({:?}, {:?})", region_scope, block);
715
716        block = self.leave_top_scope(block);
717
718        self.scopes.pop_scope(region_scope);
719
720        block.unit()
721    }
722
723    /// Sets up the drops for breaking from `block` to `target`.
724    pub(crate) fn break_scope(
725        &mut self,
726        mut block: BasicBlock,
727        value: Option<ExprId>,
728        target: BreakableTarget,
729        source_info: SourceInfo,
730    ) -> BlockAnd<()> {
731        let span = source_info.span;
732
733        let get_scope_index = |scope: region::Scope| {
734            // find the loop-scope by its `region::Scope`.
735            self.scopes
736                .breakable_scopes
737                .iter()
738                .rposition(|breakable_scope| breakable_scope.region_scope == scope)
739                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
740        };
741        let (break_index, destination) = match target {
742            BreakableTarget::Return => {
743                let scope = &self.scopes.breakable_scopes[0];
744                if scope.break_destination != Place::return_place() {
745                    span_bug!(span, "`return` in item with no return scope");
746                }
747                (0, Some(scope.break_destination))
748            }
749            BreakableTarget::Break(scope) => {
750                let break_index = get_scope_index(scope);
751                let scope = &self.scopes.breakable_scopes[break_index];
752                (break_index, Some(scope.break_destination))
753            }
754            BreakableTarget::Continue(scope) => {
755                let break_index = get_scope_index(scope);
756                (break_index, None)
757            }
758        };
759
760        match (destination, value) {
761            (Some(destination), Some(value)) => {
762                debug!("stmt_expr Break val block_context.push(SubExpr)");
763                self.block_context.push(BlockFrame::SubExpr);
764                block = self.expr_into_dest(destination, block, value).into_block();
765                self.block_context.pop();
766            }
767            (Some(destination), None) => {
768                self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
769            }
770            (None, Some(_)) => {
771                panic!("`return`, `become` and `break` with a value must have a destination")
772            }
773            (None, None) => {
774                if self.tcx.sess.instrument_coverage() {
775                    // Normally we wouldn't build any MIR in this case, but that makes it
776                    // harder for coverage instrumentation to extract a relevant span for
777                    // `continue` expressions. So here we inject a dummy statement with the
778                    // desired span.
779                    self.cfg.push_coverage_span_marker(block, source_info);
780                }
781            }
782        }
783
784        let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
785        let scope_index = self.scopes.scope_index(region_scope, span);
786        let drops = if destination.is_some() {
787            &mut self.scopes.breakable_scopes[break_index].break_drops
788        } else {
789            let Some(drops) = self.scopes.breakable_scopes[break_index].continue_drops.as_mut()
790            else {
791                self.tcx.dcx().span_delayed_bug(
792                    source_info.span,
793                    "unlabelled `continue` within labelled block",
794                );
795                self.cfg.terminate(block, source_info, TerminatorKind::Unreachable);
796
797                return self.cfg.start_new_block().unit();
798            };
799            drops
800        };
801
802        let mut drop_idx = ROOT_NODE;
803        for scope in &self.scopes.scopes[scope_index + 1..] {
804            for drop in &scope.drops {
805                drop_idx = drops.add_drop(*drop, drop_idx);
806            }
807        }
808        drops.add_entry_point(block, drop_idx);
809
810        // `build_drop_trees` doesn't have access to our source_info, so we
811        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
812        // because MIR type checking will panic if it hasn't been overwritten.
813        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
814        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);
815
816        self.cfg.start_new_block().unit()
817    }
818
819    /// Based on `FunctionCx::eval_unevaluated_mir_constant_to_valtree`.
820    fn eval_unevaluated_mir_constant_to_valtree(
821        &self,
822        constant: ConstOperand<'tcx>,
823    ) -> Result<(ty::ValTree<'tcx>, Ty<'tcx>), interpret::ErrorHandled> {
824        assert!(!constant.const_.ty().has_param());
825        let (uv, ty) = match constant.const_ {
826            mir::Const::Unevaluated(uv, ty) => (uv.shrink(), ty),
827            mir::Const::Ty(_, c) => match c.kind() {
828                // A constant that came from a const generic but was then used as an argument to
829                // old-style simd_shuffle (passing as argument instead of as a generic param).
830                ty::ConstKind::Value(cv) => return Ok((cv.valtree, cv.ty)),
831                other => span_bug!(constant.span, "{other:#?}"),
832            },
833            mir::Const::Val(mir::ConstValue::Scalar(mir::interpret::Scalar::Int(val)), ty) => {
834                return Ok((ValTree::from_scalar_int(self.tcx, val), ty));
835            }
836            // We should never encounter `Const::Val` unless MIR opts (like const prop) evaluate
837            // a constant and write that value back into `Operand`s. This could happen, but is
838            // unlikely. Also: all users of `simd_shuffle` are on unstable and already need to take
839            // a lot of care around intrinsics. For an issue to happen here, it would require a
840            // macro expanding to a `simd_shuffle` call without wrapping the constant argument in a
841            // `const {}` block, but with the user passing through arbitrary expressions.
842
843            // FIXME(oli-obk): Replace the magic const generic argument of `simd_shuffle` with a
844            // real const generic, and get rid of this entire function.
845            other => span_bug!(constant.span, "{other:#?}"),
846        };
847
848        match self.tcx.const_eval_resolve_for_typeck(self.typing_env(), uv, constant.span) {
849            Ok(Ok(valtree)) => Ok((valtree, ty)),
850            Ok(Err(ty)) => span_bug!(constant.span, "could not convert {ty:?} to a valtree"),
851            Err(e) => Err(e),
852        }
853    }
854
855    /// Sets up the drops for jumping from `block` to `scope`.
856    pub(crate) fn break_const_continuable_scope(
857        &mut self,
858        mut block: BasicBlock,
859        value: ExprId,
860        scope: region::Scope,
861        source_info: SourceInfo,
862    ) -> BlockAnd<()> {
863        let span = source_info.span;
864
865        // A break can only break out of a scope, so the value should be a scope.
866        let rustc_middle::thir::ExprKind::Scope { value, .. } = self.thir[value].kind else {
867            span_bug!(span, "break value must be a scope")
868        };
869
870        let constant = match &self.thir[value].kind {
871            ExprKind::Adt(box AdtExpr { variant_index, fields, base, .. }) => {
872                assert!(matches!(base, AdtExprBase::None));
873                assert!(fields.is_empty());
874                ConstOperand {
875                    span: self.thir[value].span,
876                    user_ty: None,
877                    const_: Const::Ty(
878                        self.thir[value].ty,
879                        ty::Const::new_value(
880                            self.tcx,
881                            ValTree::from_branches(
882                                self.tcx,
883                                [ValTree::from_scalar_int(self.tcx, variant_index.as_u32().into())],
884                            ),
885                            self.thir[value].ty,
886                        ),
887                    ),
888                }
889            }
890            _ => self.as_constant(&self.thir[value]),
891        };
892
893        let break_index = self
894            .scopes
895            .const_continuable_scopes
896            .iter()
897            .rposition(|const_continuable_scope| const_continuable_scope.region_scope == scope)
898            .unwrap_or_else(|| span_bug!(span, "no enclosing const-continuable scope found"));
899
900        let scope = &self.scopes.const_continuable_scopes[break_index];
901
902        let state_decl = &self.local_decls[scope.state_place.as_local().unwrap()];
903        let state_ty = state_decl.ty;
904        let (discriminant_ty, rvalue) = match state_ty.kind() {
905            ty::Adt(adt_def, _) if adt_def.is_enum() => {
906                (state_ty.discriminant_ty(self.tcx), Rvalue::Discriminant(scope.state_place))
907            }
908            ty::Uint(_) | ty::Int(_) | ty::Float(_) | ty::Bool | ty::Char => {
909                (state_ty, Rvalue::Use(Operand::Copy(scope.state_place)))
910            }
911            _ => span_bug!(state_decl.source_info.span, "unsupported #[loop_match] state"),
912        };
913
914        // The `PatCtxt` is normally used in pattern exhaustiveness checking, but reused
915        // here because it performs normalization and const evaluation.
916        let dropless_arena = rustc_arena::DroplessArena::default();
917        let typeck_results = self.tcx.typeck(self.def_id);
918        let cx = RustcPatCtxt {
919            tcx: self.tcx,
920            typeck_results,
921            module: self.tcx.parent_module(self.hir_id).to_def_id(),
922            // FIXME(#132279): We're in a body, should handle opaques.
923            typing_env: rustc_middle::ty::TypingEnv::non_body_analysis(self.tcx, self.def_id),
924            dropless_arena: &dropless_arena,
925            match_lint_level: self.hir_id,
926            whole_match_span: Some(rustc_span::Span::default()),
927            scrut_span: rustc_span::Span::default(),
928            refutable: true,
929            known_valid_scrutinee: true,
930            internal_state: Default::default(),
931        };
932
933        let valtree = match self.eval_unevaluated_mir_constant_to_valtree(constant) {
934            Ok((valtree, ty)) => {
935                // Defensively check that the type is monomorphic.
936                assert!(!ty.has_param());
937
938                valtree
939            }
940            Err(ErrorHandled::Reported(..)) => {
941                return block.unit();
942            }
943            Err(ErrorHandled::TooGeneric(_)) => {
944                self.tcx.dcx().emit_fatal(ConstContinueBadConst { span: constant.span });
945            }
946        };
947
948        let Some(real_target) =
949            self.static_pattern_match(&cx, valtree, &*scope.arms, &scope.built_match_tree)
950        else {
951            self.tcx.dcx().emit_fatal(ConstContinueUnknownJumpTarget { span })
952        };
953
954        self.block_context.push(BlockFrame::SubExpr);
955        let state_place = scope.state_place;
956        block = self.expr_into_dest(state_place, block, value).into_block();
957        self.block_context.pop();
958
959        let discr = self.temp(discriminant_ty, source_info.span);
960        let scope_index = self
961            .scopes
962            .scope_index(self.scopes.const_continuable_scopes[break_index].region_scope, span);
963        let scope = &mut self.scopes.const_continuable_scopes[break_index];
964        self.cfg.push_assign(block, source_info, discr, rvalue);
965        let drop_and_continue_block = self.cfg.start_new_block();
966        let imaginary_target = self.cfg.start_new_block();
967        self.cfg.terminate(
968            block,
969            source_info,
970            TerminatorKind::FalseEdge { real_target: drop_and_continue_block, imaginary_target },
971        );
972
973        let drops = &mut scope.const_continue_drops;
974
975        let drop_idx = self.scopes.scopes[scope_index + 1..]
976            .iter()
977            .flat_map(|scope| &scope.drops)
978            .fold(ROOT_NODE, |drop_idx, &drop| drops.add_drop(drop, drop_idx));
979
980        drops.add_entry_point(imaginary_target, drop_idx);
981
982        self.cfg.terminate(imaginary_target, source_info, TerminatorKind::UnwindResume);
983
984        let region_scope = scope.region_scope;
985        let scope_index = self.scopes.scope_index(region_scope, span);
986        let mut drops = DropTree::new();
987
988        let drop_idx = self.scopes.scopes[scope_index + 1..]
989            .iter()
990            .flat_map(|scope| &scope.drops)
991            .fold(ROOT_NODE, |drop_idx, &drop| drops.add_drop(drop, drop_idx));
992
993        drops.add_entry_point(drop_and_continue_block, drop_idx);
994
995        // `build_drop_trees` doesn't have access to our source_info, so we
996        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
997        // because MIR type checking will panic if it hasn't been overwritten.
998        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
999        self.cfg.terminate(drop_and_continue_block, source_info, TerminatorKind::UnwindResume);
1000
1001        self.build_exit_tree(drops, region_scope, span, Some(real_target));
1002
1003        return self.cfg.start_new_block().unit();
1004    }
1005
1006    /// Sets up the drops for breaking from `block` due to an `if` condition
1007    /// that turned out to be false.
1008    ///
1009    /// Must be called in the context of [`Builder::in_if_then_scope`], so that
1010    /// there is an if-then scope to tell us what the target scope is.
1011    pub(crate) fn break_for_else(&mut self, block: BasicBlock, source_info: SourceInfo) {
1012        let if_then_scope = self
1013            .scopes
1014            .if_then_scope
1015            .as_ref()
1016            .unwrap_or_else(|| span_bug!(source_info.span, "no if-then scope found"));
1017
1018        let target = if_then_scope.region_scope;
1019        let scope_index = self.scopes.scope_index(target, source_info.span);
1020
1021        // Upgrade `if_then_scope` to `&mut`.
1022        let if_then_scope = self.scopes.if_then_scope.as_mut().expect("upgrading & to &mut");
1023
1024        let mut drop_idx = ROOT_NODE;
1025        let drops = &mut if_then_scope.else_drops;
1026        for scope in &self.scopes.scopes[scope_index + 1..] {
1027            for drop in &scope.drops {
1028                drop_idx = drops.add_drop(*drop, drop_idx);
1029            }
1030        }
1031        drops.add_entry_point(block, drop_idx);
1032
1033        // `build_drop_trees` doesn't have access to our source_info, so we
1034        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
1035        // because MIR type checking will panic if it hasn't been overwritten.
1036        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
1037        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);
1038    }
1039
1040    /// Sets up the drops for explicit tail calls.
1041    ///
1042    /// Unlike other kinds of early exits, tail calls do not go through the drop tree.
1043    /// Instead, all scheduled drops are immediately added to the CFG.
1044    pub(crate) fn break_for_tail_call(
1045        &mut self,
1046        mut block: BasicBlock,
1047        args: &[Spanned<Operand<'tcx>>],
1048        source_info: SourceInfo,
1049    ) -> BlockAnd<()> {
1050        let arg_drops: Vec<_> = args
1051            .iter()
1052            .rev()
1053            .filter_map(|arg| match &arg.node {
1054                Operand::Copy(_) => bug!("copy op in tail call args"),
1055                Operand::Move(place) => {
1056                    let local =
1057                        place.as_local().unwrap_or_else(|| bug!("projection in tail call args"));
1058
1059                    if !self.local_decls[local].ty.needs_drop(self.tcx, self.typing_env()) {
1060                        return None;
1061                    }
1062
1063                    Some(DropData { source_info, local, kind: DropKind::Value })
1064                }
1065                Operand::Constant(_) => None,
1066            })
1067            .collect();
1068
1069        let mut unwind_to = self.diverge_cleanup_target(
1070            self.scopes.scopes.iter().rev().nth(1).unwrap().region_scope,
1071            DUMMY_SP,
1072        );
1073        let typing_env = self.typing_env();
1074        let unwind_drops = &mut self.scopes.unwind_drops;
1075
1076        // The innermost scope contains only the destructors for the tail call arguments;
1077        // we only want to drop these in case of a panic, so we skip that scope here.
1078        for scope in self.scopes.scopes[1..].iter().rev().skip(1) {
1079            // FIXME(explicit_tail_calls) code duplication with `build_scope_drops`
1080            for drop_data in scope.drops.iter().rev() {
1081                let source_info = drop_data.source_info;
1082                let local = drop_data.local;
1083
1084                if !self.local_decls[local].ty.needs_drop(self.tcx, typing_env) {
1085                    continue;
1086                }
1087
1088                match drop_data.kind {
1089                    DropKind::Value => {
1090                        // `unwind_to` should drop the value that we're about to
1091                        // schedule. If dropping this value panics, then we continue
1092                        // with the *next* value on the unwind path.
1093                        debug_assert_eq!(
1094                            unwind_drops.drop_nodes[unwind_to].data.local,
1095                            drop_data.local
1096                        );
1097                        debug_assert_eq!(
1098                            unwind_drops.drop_nodes[unwind_to].data.kind,
1099                            drop_data.kind
1100                        );
1101                        unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1102
1103                        let mut unwind_entry_point = unwind_to;
1104
1105                        // the tail call arguments must be dropped if any of these drops panic
1106                        for drop in arg_drops.iter().copied() {
1107                            unwind_entry_point = unwind_drops.add_drop(drop, unwind_entry_point);
1108                        }
1109
1110                        unwind_drops.add_entry_point(block, unwind_entry_point);
1111
1112                        let next = self.cfg.start_new_block();
1113                        self.cfg.terminate(
1114                            block,
1115                            source_info,
1116                            TerminatorKind::Drop {
1117                                place: local.into(),
1118                                target: next,
1119                                unwind: UnwindAction::Continue,
1120                                replace: false,
1121                                drop: None,
1122                                async_fut: None,
1123                            },
1124                        );
1125                        block = next;
1126                    }
1127                    DropKind::ForLint => {
1128                        self.cfg.push(
1129                            block,
1130                            Statement::new(
1131                                source_info,
1132                                StatementKind::BackwardIncompatibleDropHint {
1133                                    place: Box::new(local.into()),
1134                                    reason: BackwardIncompatibleDropReason::Edition2024,
1135                                },
1136                            ),
1137                        );
1138                    }
1139                    DropKind::Storage => {
1140                        // Only temps and vars need their storage dead.
1141                        assert!(local.index() > self.arg_count);
1142                        self.cfg.push(
1143                            block,
1144                            Statement::new(source_info, StatementKind::StorageDead(local)),
1145                        );
1146                    }
1147                }
1148            }
1149        }
1150
1151        block.unit()
1152    }
1153
1154    fn is_async_drop_impl(
1155        tcx: TyCtxt<'tcx>,
1156        local_decls: &IndexVec<Local, LocalDecl<'tcx>>,
1157        typing_env: ty::TypingEnv<'tcx>,
1158        local: Local,
1159    ) -> bool {
1160        let ty = local_decls[local].ty;
1161        if ty.is_async_drop(tcx, typing_env) || ty.is_coroutine() {
1162            return true;
1163        }
1164        ty.needs_async_drop(tcx, typing_env)
1165    }
1166    fn is_async_drop(&self, local: Local) -> bool {
1167        Self::is_async_drop_impl(self.tcx, &self.local_decls, self.typing_env(), local)
1168    }
1169
1170    fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
1171        // If we are emitting a `drop` statement, we need to have the cached
1172        // diverge cleanup pads ready in case that drop panics.
1173        let needs_cleanup = self.scopes.scopes.last().is_some_and(|scope| scope.needs_cleanup());
1174        let is_coroutine = self.coroutine.is_some();
1175        let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };
1176
1177        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
1178        let has_async_drops = is_coroutine
1179            && scope.drops.iter().any(|v| v.kind == DropKind::Value && self.is_async_drop(v.local));
1180        let dropline_to = if has_async_drops { Some(self.diverge_dropline()) } else { None };
1181        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
1182        let typing_env = self.typing_env();
1183        build_scope_drops(
1184            &mut self.cfg,
1185            &mut self.scopes.unwind_drops,
1186            &mut self.scopes.coroutine_drops,
1187            scope,
1188            block,
1189            unwind_to,
1190            dropline_to,
1191            is_coroutine && needs_cleanup,
1192            self.arg_count,
1193            |v: Local| Self::is_async_drop_impl(self.tcx, &self.local_decls, typing_env, v),
1194        )
1195        .into_block()
1196    }
1197
1198    /// Possibly creates a new source scope if `current_root` and `parent_root`
1199    /// are different, or if -Zmaximal-hir-to-mir-coverage is enabled.
1200    pub(crate) fn maybe_new_source_scope(
1201        &mut self,
1202        span: Span,
1203        current_id: HirId,
1204        parent_id: HirId,
1205    ) {
1206        let (current_root, parent_root) =
1207            if self.tcx.sess.opts.unstable_opts.maximal_hir_to_mir_coverage {
1208                // Some consumers of rustc need to map MIR locations back to HIR nodes. Currently
1209                // the only part of rustc that tracks MIR -> HIR is the
1210                // `SourceScopeLocalData::lint_root` field that tracks lint levels for MIR
1211                // locations. Normally the number of source scopes is limited to the set of nodes
1212                // with lint annotations. The -Zmaximal-hir-to-mir-coverage flag changes this
1213                // behavior to maximize the number of source scopes, increasing the granularity of
1214                // the MIR->HIR mapping.
1215                (current_id, parent_id)
1216            } else {
1217                // Use `maybe_lint_level_root_bounded` to avoid adding Hir dependencies on our
1218                // parents. We estimate the true lint roots here to avoid creating a lot of source
1219                // scopes.
1220                (
1221                    self.maybe_lint_level_root_bounded(current_id),
1222                    if parent_id == self.hir_id {
1223                        parent_id // this is very common
1224                    } else {
1225                        self.maybe_lint_level_root_bounded(parent_id)
1226                    },
1227                )
1228            };
1229
1230        if current_root != parent_root {
1231            let lint_level = LintLevel::Explicit(current_root);
1232            self.source_scope = self.new_source_scope(span, lint_level);
1233        }
1234    }
1235
1236    /// Walks upwards from `orig_id` to find a node which might change lint levels with attributes.
1237    /// It stops at `self.hir_id` and just returns it if reached.
1238    fn maybe_lint_level_root_bounded(&mut self, orig_id: HirId) -> HirId {
1239        // This assertion lets us just store `ItemLocalId` in the cache, rather
1240        // than the full `HirId`.
1241        assert_eq!(orig_id.owner, self.hir_id.owner);
1242
1243        let mut id = orig_id;
1244        loop {
1245            if id == self.hir_id {
1246                // This is a moderately common case, mostly hit for previously unseen nodes.
1247                break;
1248            }
1249
1250            if self.tcx.hir_attrs(id).iter().any(|attr| Level::from_attr(attr).is_some()) {
1251                // This is a rare case. It's for a node path that doesn't reach the root due to an
1252                // intervening lint level attribute. This result doesn't get cached.
1253                return id;
1254            }
1255
1256            let next = self.tcx.parent_hir_id(id);
1257            if next == id {
1258                bug!("lint traversal reached the root of the crate");
1259            }
1260            id = next;
1261
1262            // This lookup is just an optimization; it can be removed without affecting
1263            // functionality. It might seem strange to see this at the end of this loop, but the
1264            // `orig_id` passed in to this function is almost always previously unseen, for which a
1265            // lookup will be a miss. So we only do lookups for nodes up the parent chain, where
1266            // cache lookups have a very high hit rate.
1267            if self.lint_level_roots_cache.contains(id.local_id) {
1268                break;
1269            }
1270        }
1271
1272        // `orig_id` traced to `self_id`; record this fact. If `orig_id` is a leaf node it will
1273        // rarely (never?) subsequently be searched for, but it's hard to know if that is the case.
1274        // The performance wins from the cache all come from caching non-leaf nodes.
1275        self.lint_level_roots_cache.insert(orig_id.local_id);
1276        self.hir_id
1277    }
1278
1279    /// Creates a new source scope, nested in the current one.
1280    pub(crate) fn new_source_scope(&mut self, span: Span, lint_level: LintLevel) -> SourceScope {
1281        let parent = self.source_scope;
1282        debug!(
1283            "new_source_scope({:?}, {:?}) - parent({:?})={:?}",
1284            span,
1285            lint_level,
1286            parent,
1287            self.source_scopes.get(parent)
1288        );
1289        let scope_local_data = SourceScopeLocalData {
1290            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
1291                lint_root
1292            } else {
1293                self.source_scopes[parent].local_data.as_ref().unwrap_crate_local().lint_root
1294            },
1295        };
1296        self.source_scopes.push(SourceScopeData {
1297            span,
1298            parent_scope: Some(parent),
1299            inlined: None,
1300            inlined_parent_scope: None,
1301            local_data: ClearCrossCrate::Set(scope_local_data),
1302        })
1303    }
1304
1305    /// Given a span and the current source scope, make a SourceInfo.
1306    pub(crate) fn source_info(&self, span: Span) -> SourceInfo {
1307        SourceInfo { span, scope: self.source_scope }
1308    }
1309
1310    // Finding scopes
1311    // ==============
1312
1313    /// Returns the scope that we should use as the lifetime of an
1314    /// operand. Basically, an operand must live until it is consumed.
1315    /// This is similar to, but not quite the same as, the temporary
1316    /// scope (which can be larger or smaller).
1317    ///
1318    /// Consider:
1319    /// ```ignore (illustrative)
1320    /// let x = foo(bar(X, Y));
1321    /// ```
1322    /// We wish to pop the storage for X and Y after `bar()` is
1323    /// called, not after the whole `let` is completed.
1324    ///
1325    /// As another example, if the second argument diverges:
1326    /// ```ignore (illustrative)
1327    /// foo(Box::new(2), panic!())
1328    /// ```
1329    /// We would allocate the box but then free it on the unwinding
1330    /// path; we would also emit a free on the 'success' path from
1331    /// panic, but that will turn out to be removed as dead-code.
1332    pub(crate) fn local_scope(&self) -> region::Scope {
1333        self.scopes.topmost()
1334    }
1335
1336    // Scheduling drops
1337    // ================
1338
1339    pub(crate) fn schedule_drop_storage_and_value(
1340        &mut self,
1341        span: Span,
1342        region_scope: region::Scope,
1343        local: Local,
1344    ) {
1345        self.schedule_drop(span, region_scope, local, DropKind::Storage);
1346        self.schedule_drop(span, region_scope, local, DropKind::Value);
1347    }
1348
1349    /// Indicates that `local` should be dropped on exit from `region_scope`.
1350    ///
1351    /// When called with `DropKind::Storage`, `local` must not be the return
1352    /// place or a function parameter.
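    ///
    /// As an illustrative sketch, lowering
    /// ```ignore (illustrative)
    /// let s = String::new();
    /// ```
    /// schedules both a `DropKind::Storage` drop (which becomes a `StorageDead(s)`) and a
    /// `DropKind::Value` drop (which runs `String`'s destructor) for the enclosing scope;
    /// see `schedule_drop_storage_and_value` above.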
1353    pub(crate) fn schedule_drop(
1354        &mut self,
1355        span: Span,
1356        region_scope: region::Scope,
1357        local: Local,
1358        drop_kind: DropKind,
1359    ) {
1360        let needs_drop = match drop_kind {
1361            DropKind::Value | DropKind::ForLint => {
1362                if !self.local_decls[local].ty.needs_drop(self.tcx, self.typing_env()) {
1363                    return;
1364                }
1365                true
1366            }
1367            DropKind::Storage => {
1368                if local.index() <= self.arg_count {
1369                    span_bug!(
1370                        span,
1371                        "`schedule_drop` called with body argument {:?} \
1372                        but its storage does not require a drop",
1373                        local,
1374                    )
1375                }
1376                false
1377            }
1378        };
1379
1380        // When building drops, we try to cache chains of drops to reduce the
1381        // number of `DropTree::add_drop` calls. This, however, means that
1382        // whenever we add a drop into a scope which already had some entries
1383        // in the drop tree built (and thus, cached) for it, we must invalidate
1384        // all caches which might branch into the scope which had a drop just
1385        // added to it. This is necessary, because otherwise some other code
1386        // might use the cache to branch into an already built chain of drops,
1387        // essentially ignoring the newly added drop.
1388        //
1389        // For example, consider two scopes with a drop in each. These
1390        // are built and thus the caches are filled:
1391        //
1392        // +--------------------------------------------------------+
1393        // | +---------------------------------+                    |
1394        // | | +--------+     +-------------+  |  +---------------+ |
1395        // | | | return | <-+ | drop(outer) | <-+ |  drop(middle) | |
1396        // | | +--------+     +-------------+  |  +---------------+ |
1397        // | +------------|outer_scope cache|--+                    |
1398        // +------------------------------|middle_scope cache|------+
1399        //
1400        // Now, a new, innermost scope is added along with a new drop into
1401        // both innermost and outermost scopes:
1402        //
1403        // +------------------------------------------------------------+
1404        // | +----------------------------------+                       |
1405        // | | +--------+      +-------------+  |   +---------------+   | +-------------+
1406        // | | | return | <+   | drop(new)   | <-+  |  drop(middle) | <--+| drop(inner) |
1407        // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
1408        // | |             +-+ +-------------+  |                       |
1409        // | +---|invalid outer_scope cache|----+                       |
1410        // +---------------------|invalid middle_scope cache|-----------+
1411        //
1412        // If, when adding `drop(new)` we do not invalidate the cached blocks for both
1413        // outer_scope and middle_scope, then, when building drops for the inner (rightmost)
1414        // scope, the old, cached blocks, without `drop(new)` will get used, producing the
1415        // wrong results.
1416        //
1417        // Note that this code iterates scopes from the innermost to the outermost,
1418        // invalidating the cache of each scope visited. This way only the bare minimum of
1419        // caches gets invalidated; i.e., if a new drop is added into the middle scope, the
1420        // cache of the outer scope stays intact.
1421        //
1422        // Since we only cache drops for the unwind path and the coroutine drop
1423        // path, we only need to invalidate the cache for drops that happen on
1424        // the unwind or coroutine drop paths. This means that for
1425        // non-coroutines we don't need to invalidate caches for `DropKind::Storage`.
1426        let invalidate_caches = needs_drop || self.coroutine.is_some();
1427        for scope in self.scopes.scopes.iter_mut().rev() {
1428            if invalidate_caches {
1429                scope.invalidate_cache();
1430            }
1431
1432            if scope.region_scope == region_scope {
1433                let region_scope_span = region_scope.span(self.tcx, self.region_scope_tree);
1434                // Attribute scope exit drops to scope's closing brace.
1435                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);
1436
1437                scope.drops.push(DropData {
1438                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
1439                    local,
1440                    kind: drop_kind,
1441                });
1442
1443                return;
1444            }
1445        }
1446
1447        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
1448    }
1449
1450    /// Schedules emission of a backwards-incompatible drop lint hint.
1451    /// Applicable only to temporary values for now.
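    ///
    /// An illustrative sketch of the kind of code these hints are about (the function names
    /// below are hypothetical): in Edition 2024 the drop timing of temporaries in a block's
    /// tail expression can differ from earlier editions, so a hint is recorded for such
    /// temporaries and later surfaced by the corresponding lint.
    /// ```ignore (illustrative)
    /// fn f() -> usize {
    ///     let guard = acquire_lock();          // hypothetical value with a destructor
    ///     compute(&guard) + make_temp().len()  // the `make_temp()` temporary gets a hint
    /// }
    /// ```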
1452    #[instrument(level = "debug", skip(self))]
1453    pub(crate) fn schedule_backwards_incompatible_drop(
1454        &mut self,
1455        span: Span,
1456        region_scope: region::Scope,
1457        local: Local,
1458    ) {
1459        // Note that we are *not* gating BIDs here on whether they have a significant destructor.
1460        // We need to know all of them so that we can capture potential borrow-checking errors.
1461        for scope in self.scopes.scopes.iter_mut().rev() {
1462            // Since we are inserting a linting MIR statement, we have to invalidate the caches.
1463            scope.invalidate_cache();
1464            if scope.region_scope == region_scope {
1465                let region_scope_span = region_scope.span(self.tcx, self.region_scope_tree);
1466                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);
1467
1468                scope.drops.push(DropData {
1469                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
1470                    local,
1471                    kind: DropKind::ForLint,
1472                });
1473
1474                return;
1475            }
1476        }
1477        span_bug!(
1478            span,
1479            "region scope {:?} not in scope to drop {:?} for linting",
1480            region_scope,
1481            local
1482        );
1483    }
1484
1485    /// Indicates that the "local operand" stored in `local` is
1486    /// *moved* at some point during execution (see `local_scope` for
1487    /// more information about what a "local operand" is -- in short,
1488    /// it's an intermediate operand created as part of preparing some
1489    /// MIR instruction). We use this information to suppress
1490    /// redundant drops on the non-unwind paths. This results in less
1491    /// MIR, but also avoids spurious borrow check errors
1492    /// (c.f. #64391).
1493    ///
1494    /// Example: when compiling the call to `foo` here:
1495    ///
1496    /// ```ignore (illustrative)
1497    /// foo(bar(), ...)
1498    /// ```
1499    ///
1500    /// we would evaluate `bar()` to an operand `_X`. We would also
1501    /// schedule `_X` to be dropped when the expression scope for
1502    /// `foo(bar())` is exited. This is relevant, for example, if the
1503    /// later arguments should unwind (it would ensure that `_X` gets
1504    /// dropped). However, if no unwind occurs, then `_X` will be
1505    /// unconditionally consumed by the `call`:
1506    ///
1507    /// ```ignore (illustrative)
1508    /// bb {
1509    ///   ...
1510    ///   _R = CALL(foo, _X, ...)
1511    /// }
1512    /// ```
1513    ///
1514    /// However, `_X` is still registered to be dropped, and so if we
1515    /// do nothing else, we would generate a `DROP(_X)` that occurs
1516    /// after the call. This will later be optimized out by the
1517    /// drop-elaboration code, but in the meantime it can lead to
1518    /// spurious borrow-check errors -- the problem, ironically, is
1519    /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
1520    /// that it creates. See #64391 for an example.
1521    pub(crate) fn record_operands_moved(&mut self, operands: &[Spanned<Operand<'tcx>>]) {
1522        let local_scope = self.local_scope();
1523        let scope = self.scopes.scopes.last_mut().unwrap();
1524
1525        assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!",);
1526
1527        // look for moves of a local variable, like `MOVE(_X)`
1528        let locals_moved = operands.iter().flat_map(|operand| match operand.node {
1529            Operand::Copy(_) | Operand::Constant(_) => None,
1530            Operand::Move(place) => place.as_local(),
1531        });
1532
1533        for local in locals_moved {
1534            // check if we have a Drop for this operand and -- if so
1535            // -- add it to the list of moved operands. Note that this
1536            // local might not have been an operand created for this
1537            // call, it could come from other places too.
1538            if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
1539                scope.moved_locals.push(local);
1540            }
1541        }
1542    }
1543
1544    // Other
1545    // =====
1546
1547    /// Returns the [DropIdx] for the innermost drop if the function unwound at
1548    /// this point. The `DropIdx` will be created if it doesn't already exist.
1549    fn diverge_cleanup(&mut self) -> DropIdx {
1550        // It is okay to use a dummy span because getting the scope index of the topmost
1551        // scope must always succeed.
1552        self.diverge_cleanup_target(self.scopes.topmost(), DUMMY_SP)
1553    }
1554
1555    /// This is similar to [diverge_cleanup](Self::diverge_cleanup) except that its target is
1556    /// some ancestor scope instead of the current scope.
1557    /// It is possible to unwind to an ancestor scope if a drop panics as
1558    /// the program breaks out of an if-then scope.
1559    fn diverge_cleanup_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
1560        let target = self.scopes.scope_index(target_scope, span);
1561        let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
1562            .iter()
1563            .enumerate()
1564            .rev()
1565            .find_map(|(scope_idx, scope)| {
1566                scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
1567            })
1568            .unwrap_or((0, ROOT_NODE));
1569
1570        if uncached_scope > target {
1571            return cached_drop;
1572        }
1573
1574        let is_coroutine = self.coroutine.is_some();
1575        for scope in &mut self.scopes.scopes[uncached_scope..=target] {
1576            for drop in &scope.drops {
1577                if is_coroutine || drop.kind == DropKind::Value {
1578                    cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
1579                }
1580            }
1581            scope.cached_unwind_block = Some(cached_drop);
1582        }
1583
1584        cached_drop
1585    }
1586
1587    /// Prepares to create a path that performs all required cleanup for a
1588    /// terminator that can unwind at the given basic block.
1589    ///
1590    /// This path terminates in Resume. The path isn't created until after all
1591    /// of the non-unwind paths in this item have been lowered.
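    ///
    /// Illustrative use when lowering a terminator that can unwind (see `assert` and
    /// `build_drop_and_replace` below for concrete instances of this pattern):
    /// ```ignore (illustrative)
    /// self.cfg.terminate(block, source_info, TerminatorKind::Call { /* ... */ });
    /// self.diverge_from(block); // hook `block`'s unwind edge into the unwind drop tree
    /// ```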
1592    pub(crate) fn diverge_from(&mut self, start: BasicBlock) {
1593        debug_assert!(
1594            matches!(
1595                self.cfg.block_data(start).terminator().kind,
1596                TerminatorKind::Assert { .. }
1597                    | TerminatorKind::Call { .. }
1598                    | TerminatorKind::Drop { .. }
1599                    | TerminatorKind::FalseUnwind { .. }
1600                    | TerminatorKind::InlineAsm { .. }
1601            ),
1602            "diverge_from called on block with terminator that cannot unwind."
1603        );
1604
1605        let next_drop = self.diverge_cleanup();
1606        self.scopes.unwind_drops.add_entry_point(start, next_drop);
1607    }
1608
1609    /// Returns the [DropIdx] for the innermost drop on the dropline (coroutine drop path).
1610    /// The `DropIdx` will be created if it doesn't already exist.
1611    fn diverge_dropline(&mut self) -> DropIdx {
1612        // It is okay to use a dummy span because getting the scope index of the topmost
1613        // scope must always succeed.
1614        self.diverge_dropline_target(self.scopes.topmost(), DUMMY_SP)
1615    }
1616
1617    /// Similar to [diverge_cleanup_target](Self::diverge_cleanup_target), but for the dropline (coroutine drop path).
1618    fn diverge_dropline_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
1619        debug_assert!(
1620            self.coroutine.is_some(),
1621            "diverge_dropline_target is valid only for coroutine"
1622        );
1623        let target = self.scopes.scope_index(target_scope, span);
1624        let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
1625            .iter()
1626            .enumerate()
1627            .rev()
1628            .find_map(|(scope_idx, scope)| {
1629                scope.cached_coroutine_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
1630            })
1631            .unwrap_or((0, ROOT_NODE));
1632
1633        if uncached_scope > target {
1634            return cached_drop;
1635        }
1636
1637        for scope in &mut self.scopes.scopes[uncached_scope..=target] {
1638            for drop in &scope.drops {
1639                cached_drop = self.scopes.coroutine_drops.add_drop(*drop, cached_drop);
1640            }
1641            scope.cached_coroutine_drop_block = Some(cached_drop);
1642        }
1643
1644        cached_drop
1645    }
1646
1647    /// Sets up a path that performs all required cleanup for dropping a
1648    /// coroutine, starting from the given block that ends in
1649    /// [TerminatorKind::Yield].
1650    ///
1651    /// This path terminates in CoroutineDrop.
1652    pub(crate) fn coroutine_drop_cleanup(&mut self, yield_block: BasicBlock) {
1653        debug_assert!(
1654            matches!(
1655                self.cfg.block_data(yield_block).terminator().kind,
1656                TerminatorKind::Yield { .. }
1657            ),
1658            "coroutine_drop_cleanup called on block with non-yield terminator."
1659        );
1660        let cached_drop = self.diverge_dropline();
1661        self.scopes.coroutine_drops.add_entry_point(yield_block, cached_drop);
1662    }
1663
1664    /// Utility function for *non*-scope code to build their own drops.
1665    /// Forces a drop at this point in the MIR by creating a new block.
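    ///
    /// Illustrative sketch of the shape of the generated MIR (block names are made up):
    /// ```ignore (illustrative)
    /// bb0: Drop(place) -> [return: bb_assign, unwind: bb_assign_unwind]
    /// bb_assign:                  place = value; ...   // normal path continues here
    /// bb_assign_unwind (cleanup): place = value; ...   // then unwinding resumes
    /// ```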
1666    pub(crate) fn build_drop_and_replace(
1667        &mut self,
1668        block: BasicBlock,
1669        span: Span,
1670        place: Place<'tcx>,
1671        value: Rvalue<'tcx>,
1672    ) -> BlockAnd<()> {
1673        let source_info = self.source_info(span);
1674
1675        // create the new block for the assignment
1676        let assign = self.cfg.start_new_block();
1677        self.cfg.push_assign(assign, source_info, place, value.clone());
1678
1679        // create the new block for the assignment in the case of unwinding
1680        let assign_unwind = self.cfg.start_new_cleanup_block();
1681        self.cfg.push_assign(assign_unwind, source_info, place, value.clone());
1682
1683        self.cfg.terminate(
1684            block,
1685            source_info,
1686            TerminatorKind::Drop {
1687                place,
1688                target: assign,
1689                unwind: UnwindAction::Cleanup(assign_unwind),
1690                replace: true,
1691                drop: None,
1692                async_fut: None,
1693            },
1694        );
1695        self.diverge_from(block);
1696
1697        assign.unit()
1698    }
1699
1700    /// Creates an `Assert` terminator and returns the success block.
1701    /// If the boolean condition operand is not the expected value,
1702    /// a runtime panic will be caused with the given message.
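    ///
    /// For instance (an illustrative sketch, not exact MIR syntax), overflow and bounds
    /// checks are lowered to this shape:
    /// ```ignore (illustrative)
    /// _ok = Lt(_index, _len);
    /// assert(_ok, "index out of bounds ...") -> [success: bb_access, unwind: ...]
    /// ```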
1703    pub(crate) fn assert(
1704        &mut self,
1705        block: BasicBlock,
1706        cond: Operand<'tcx>,
1707        expected: bool,
1708        msg: AssertMessage<'tcx>,
1709        span: Span,
1710    ) -> BasicBlock {
1711        let source_info = self.source_info(span);
1712        let success_block = self.cfg.start_new_block();
1713
1714        self.cfg.terminate(
1715            block,
1716            source_info,
1717            TerminatorKind::Assert {
1718                cond,
1719                expected,
1720                msg: Box::new(msg),
1721                target: success_block,
1722                unwind: UnwindAction::Continue,
1723            },
1724        );
1725        self.diverge_from(block);
1726
1727        success_block
1728    }
1729
1730    /// Unschedules any drops in the top scope.
1731    ///
1732    /// This is only needed for `match` arm scopes, because they have one
1733    /// entrance per pattern, but only one exit.
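    ///
    /// Illustrative sketch: in
    /// ```ignore (illustrative)
    /// match x {
    ///     Some(1) | Some(2) => body(),
    ///     _ => {}
    /// }
    /// ```
    /// the first arm's scope is entered once per pattern, so drops scheduled while lowering
    /// one entrance are unscheduled here before the next entrance to avoid duplicate drops.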
1734    pub(crate) fn clear_top_scope(&mut self, region_scope: region::Scope) {
1735        let top_scope = self.scopes.scopes.last_mut().unwrap();
1736
1737        assert_eq!(top_scope.region_scope, region_scope);
1738
1739        top_scope.drops.clear();
1740        top_scope.invalidate_cache();
1741    }
1742}
1743
1744/// Builds drops for `pop_scope` and `leave_top_scope`.
1745///
1746/// # Parameters
1747///
1748/// * `unwind_drops`, the drop tree data structure storing what needs to be cleaned up if an unwind occurs
1749/// * `scope`, describes the drops that will occur on exiting the scope in regular execution
1750/// * `block`, the block to branch to once drops are complete (assuming no unwind occurs)
1751/// * `unwind_to`, describes the drops that would occur at this point in the code if a
1752///   panic occurred (a subset of the drops in `scope`, since we sometimes elide StorageDead and other
1753///   instructions on unwinding)
1754/// * `dropline_to`, describes the drops that would occur at this point in the code if a
1755///   coroutine drop occurred.
1756/// * `storage_dead_on_unwind`, if true, then we should emit `StorageDead` even when unwinding
1757/// * `arg_count`, number of MIR local variables corresponding to fn arguments (used to assert that we don't drop those)
1758fn build_scope_drops<'tcx, F>(
1759    cfg: &mut CFG<'tcx>,
1760    unwind_drops: &mut DropTree,
1761    coroutine_drops: &mut DropTree,
1762    scope: &Scope,
1763    block: BasicBlock,
1764    unwind_to: DropIdx,
1765    dropline_to: Option<DropIdx>,
1766    storage_dead_on_unwind: bool,
1767    arg_count: usize,
1768    is_async_drop: F,
1769) -> BlockAnd<()>
1770where
1771    F: Fn(Local) -> bool,
1772{
1773    debug!("build_scope_drops({:?} -> {:?}), dropline_to={:?}", block, scope, dropline_to);
1774
1775    // Build up the drops in evaluation order. The end result will
1776    // look like:
1777    //
1778    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
1779    //               |                    |                 |
1780    //               :                    |                 |
1781    //                                    V                 V
1782    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
1783    //
1784    // The horizontal arrows represent the execution path when the drops return
1785    // successfully. The downwards arrows represent the execution path when the
1786    // drops panic (panicking while unwinding will abort, so there's no need for
1787    // another set of arrows).
1788    //
1789    // For coroutines, we unwind from a drop on a local to its StorageDead
1790    // statement. For other functions we don't worry about StorageDead. The
1791    // drops for the unwind path should have already been generated by
1792    // `diverge_cleanup`.
1793
1794    // `unwind_to` indicates what needs to be dropped should unwinding occur.
1795    // This is a subset of what needs to be dropped when exiting the scope.
1796    // As we emit the scope's exit drops, we also move `unwind_to` backwards to match,
1797    // so that we can use it should a destructor panic.
1798    let mut unwind_to = unwind_to;
1799
1800    // `block` is the block into which the next drop terminator is emitted. It starts as the
1801    // incoming block, which receives `drops[n]` (the first drop executed; see the diagram above);
1802    // after each drop is emitted it becomes that drop's target block, so `drops[n]` branches to
1803    // `drops[n-1]`, and so on. After the loop, it is the block reached once all drops have run.
1804    let mut block = block;
1805
1806    // `dropline_to` indicates what needs to be dropped should coroutine drop occur.
1807    let mut dropline_to = dropline_to;
1808
1809    for drop_data in scope.drops.iter().rev() {
1810        let source_info = drop_data.source_info;
1811        let local = drop_data.local;
1812
1813        match drop_data.kind {
1814            DropKind::Value => {
1815                // `unwind_to` should drop the value that we're about to
1816                // schedule. If dropping this value panics, then we continue
1817                // with the *next* value on the unwind path.
1818                //
1819                // We adjust this BEFORE we create the drop (e.g., `drops[n]`)
1820                // because `drops[n]` should unwind to `drops[n-1]`.
1821                debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.local, drop_data.local);
1822                debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
1823                unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1824
1825                if let Some(idx) = dropline_to {
1826                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.local, drop_data.local);
1827                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.kind, drop_data.kind);
1828                    dropline_to = Some(coroutine_drops.drop_nodes[idx].next);
1829                }
1830
1831                // If the operand has been moved, and we are not on an unwind
1832                // path, then don't generate the drop. (We only take this into
1833                // account for non-unwind paths so as not to disturb the
1834                // caching mechanism.)
1835                if scope.moved_locals.contains(&local) {
1836                    continue;
1837                }
1838
1839                unwind_drops.add_entry_point(block, unwind_to);
1840                if let Some(to) = dropline_to
1841                    && is_async_drop(local)
1842                {
1843                    coroutine_drops.add_entry_point(block, to);
1844                }
1845
1846                let next = cfg.start_new_block();
1847                cfg.terminate(
1848                    block,
1849                    source_info,
1850                    TerminatorKind::Drop {
1851                        place: local.into(),
1852                        target: next,
1853                        unwind: UnwindAction::Continue,
1854                        replace: false,
1855                        drop: None,
1856                        async_fut: None,
1857                    },
1858                );
1859                block = next;
1860            }
1861            DropKind::ForLint => {
1862                // As in the `DropKind::Storage` case below:
1863                // normally lint-related drops are not emitted for unwind,
1864                // so we can just leave `unwind_to` unmodified, but in some
1865                // cases we emit things ALSO on the unwind path, so we need to adjust
1866                // `unwind_to` in that case.
1867                if storage_dead_on_unwind {
1868                    debug_assert_eq!(
1869                        unwind_drops.drop_nodes[unwind_to].data.local,
1870                        drop_data.local
1871                    );
1872                    debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
1873                    unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1874                }
1875
1876                // If the operand has been moved, and we are not on an unwind
1877                // path, then don't generate the drop. (We only take this into
1878                // account for non-unwind paths so as not to disturb the
1879                // caching mechanism.)
1880                if scope.moved_locals.contains(&local) {
1881                    continue;
1882                }
1883
1884                cfg.push(
1885                    block,
1886                    Statement::new(
1887                        source_info,
1888                        StatementKind::BackwardIncompatibleDropHint {
1889                            place: Box::new(local.into()),
1890                            reason: BackwardIncompatibleDropReason::Edition2024,
1891                        },
1892                    ),
1893                );
1894            }
1895            DropKind::Storage => {
1896                // Ordinarily, storage-dead nodes are not emitted on unwind, so we don't
1897                // need to adjust `unwind_to` on this path. However, in some specific cases
1898                // we *do* emit storage-dead nodes on the unwind path, and in that case now that
1899                // the storage-dead has completed, we need to adjust the `unwind_to` pointer
1900                // so that any future drops we emit will not register storage-dead.
1901                if storage_dead_on_unwind {
1902                    debug_assert_eq!(
1903                        unwind_drops.drop_nodes[unwind_to].data.local,
1904                        drop_data.local
1905                    );
1906                    debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
1907                    unwind_to = unwind_drops.drop_nodes[unwind_to].next;
1908                }
1909                if let Some(idx) = dropline_to {
1910                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.local, drop_data.local);
1911                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.kind, drop_data.kind);
1912                    dropline_to = Some(coroutine_drops.drop_nodes[idx].next);
1913                }
1914                // Only temps and vars need their storage dead.
1915                assert!(local.index() > arg_count);
1916                cfg.push(block, Statement::new(source_info, StatementKind::StorageDead(local)));
1917            }
1918        }
1919    }
1920    block.unit()
1921}
1922
1923impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
1924    /// Build a drop tree for a breakable scope.
1925    ///
1926    /// If `continue_block` is `Some`, then the tree is for `continue` inside a
1927    /// loop. Otherwise this is for `break` or `return`.
1928    fn build_exit_tree(
1929        &mut self,
1930        mut drops: DropTree,
1931        else_scope: region::Scope,
1932        span: Span,
1933        continue_block: Option<BasicBlock>,
1934    ) -> Option<BlockAnd<()>> {
1935        let blocks = drops.build_mir::<ExitScopes>(&mut self.cfg, continue_block);
1936        let is_coroutine = self.coroutine.is_some();
1937
1938        // Link the exit drop tree to the unwind drop tree.
1939        if drops.drop_nodes.iter().any(|drop_node| drop_node.data.kind == DropKind::Value) {
1940            let unwind_target = self.diverge_cleanup_target(else_scope, span);
1941            let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
1942            for (drop_idx, drop_node) in drops.drop_nodes.iter_enumerated().skip(1) {
1943                match drop_node.data.kind {
1944                    DropKind::Storage | DropKind::ForLint => {
1945                        if is_coroutine {
1946                            let unwind_drop = self
1947                                .scopes
1948                                .unwind_drops
1949                                .add_drop(drop_node.data, unwind_indices[drop_node.next]);
1950                            unwind_indices.push(unwind_drop);
1951                        } else {
1952                            unwind_indices.push(unwind_indices[drop_node.next]);
1953                        }
1954                    }
1955                    DropKind::Value => {
1956                        let unwind_drop = self
1957                            .scopes
1958                            .unwind_drops
1959                            .add_drop(drop_node.data, unwind_indices[drop_node.next]);
1960                        self.scopes.unwind_drops.add_entry_point(
1961                            blocks[drop_idx].unwrap(),
1962                            unwind_indices[drop_node.next],
1963                        );
1964                        unwind_indices.push(unwind_drop);
1965                    }
1966                }
1967            }
1968        }
1969        // Link the exit drop tree to the dropline drop tree (coroutine drop path) for async drops
1970        if is_coroutine
1971            && drops.drop_nodes.iter().any(|DropNode { data, next: _ }| {
1972                data.kind == DropKind::Value && self.is_async_drop(data.local)
1973            })
1974        {
1975            let dropline_target = self.diverge_dropline_target(else_scope, span);
1976            let mut dropline_indices = IndexVec::from_elem_n(dropline_target, 1);
1977            for (drop_idx, drop_data) in drops.drop_nodes.iter_enumerated().skip(1) {
1978                let coroutine_drop = self
1979                    .scopes
1980                    .coroutine_drops
1981                    .add_drop(drop_data.data, dropline_indices[drop_data.next]);
1982                match drop_data.data.kind {
1983                    DropKind::Storage | DropKind::ForLint => {}
1984                    DropKind::Value => {
1985                        if self.is_async_drop(drop_data.data.local) {
1986                            self.scopes.coroutine_drops.add_entry_point(
1987                                blocks[drop_idx].unwrap(),
1988                                dropline_indices[drop_data.next],
1989                            );
1990                        }
1991                    }
1992                }
1993                dropline_indices.push(coroutine_drop);
1994            }
1995        }
1996        blocks[ROOT_NODE].map(BasicBlock::unit)
1997    }
1998
1999    /// Build the unwind and coroutine drop trees.
2000    pub(crate) fn build_drop_trees(&mut self) {
2001        if self.coroutine.is_some() {
2002            self.build_coroutine_drop_trees();
2003        } else {
2004            Self::build_unwind_tree(
2005                &mut self.cfg,
2006                &mut self.scopes.unwind_drops,
2007                self.fn_span,
2008                &mut None,
2009            );
2010        }
2011    }
2012
2013    fn build_coroutine_drop_trees(&mut self) {
2014        // Build the drop tree for dropping the coroutine while it's suspended.
2015        let drops = &mut self.scopes.coroutine_drops;
2016        let cfg = &mut self.cfg;
2017        let fn_span = self.fn_span;
2018        let blocks = drops.build_mir::<CoroutineDrop>(cfg, None);
2019        if let Some(root_block) = blocks[ROOT_NODE] {
2020            cfg.terminate(
2021                root_block,
2022                SourceInfo::outermost(fn_span),
2023                TerminatorKind::CoroutineDrop,
2024            );
2025        }
2026
2027        // Build the drop tree for unwinding in the normal control flow paths.
2028        let resume_block = &mut None;
2029        let unwind_drops = &mut self.scopes.unwind_drops;
2030        Self::build_unwind_tree(cfg, unwind_drops, fn_span, resume_block);
2031
2032        // Build the drop tree for unwinding when dropping a suspended
2033        // coroutine.
2034        //
2035        // This is a different tree from the standard unwind paths here, to
2036        // prevent drop elaboration from creating drop flags that would have
2037        // to be captured by the coroutine. I'm not sure how important this
2038        // optimization is, but it is here.
2039        for (drop_idx, drop_node) in drops.drop_nodes.iter_enumerated() {
2040            if let DropKind::Value = drop_node.data.kind
2041                && let Some(bb) = blocks[drop_idx]
2042            {
2043                debug_assert!(drop_node.next < drops.drop_nodes.next_index());
2044                drops.entry_points.push((drop_node.next, bb));
2045            }
2046        }
2047        Self::build_unwind_tree(cfg, drops, fn_span, resume_block);
2048    }
2049
2050    fn build_unwind_tree(
2051        cfg: &mut CFG<'tcx>,
2052        drops: &mut DropTree,
2053        fn_span: Span,
2054        resume_block: &mut Option<BasicBlock>,
2055    ) {
2056        let blocks = drops.build_mir::<Unwind>(cfg, *resume_block);
2057        if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
2058            cfg.terminate(resume, SourceInfo::outermost(fn_span), TerminatorKind::UnwindResume);
2059
2060            *resume_block = blocks[ROOT_NODE];
2061        }
2062    }
2063}
2064
2065// DropTreeBuilder implementations.
2066
2067struct ExitScopes;
2068
2069impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
2070    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
2071        cfg.start_new_block()
2072    }
2073    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
2074        // There should be an existing terminator with real source info and a
2075        // dummy TerminatorKind. Replace it with a proper goto.
2076        // (The dummy is added by `break_scope` and `break_for_else`.)
2077        let term = cfg.block_data_mut(from).terminator_mut();
2078        if let TerminatorKind::UnwindResume = term.kind {
2079            term.kind = TerminatorKind::Goto { target: to };
2080        } else {
2081            span_bug!(term.source_info.span, "unexpected dummy terminator kind: {:?}", term.kind);
2082        }
2083    }
2084}
2085
2086struct CoroutineDrop;
2087
2088impl<'tcx> DropTreeBuilder<'tcx> for CoroutineDrop {
2089    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
2090        cfg.start_new_block()
2091    }
2092    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
2093        let term = cfg.block_data_mut(from).terminator_mut();
2094        if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
2095            *drop = Some(to);
2096        } else if let TerminatorKind::Drop { ref mut drop, .. } = term.kind {
2097            *drop = Some(to);
2098        } else {
2099            span_bug!(
2100                term.source_info.span,
2101                "cannot enter coroutine drop tree from {:?}",
2102                term.kind
2103            )
2104        }
2105    }
2106}
2107
2108struct Unwind;
2109
2110impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
2111    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
2112        cfg.start_new_cleanup_block()
2113    }
2114    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
2115        let term = cfg.block_data_mut(from).terminator_mut();
2116        match &mut term.kind {
2117            TerminatorKind::Drop { unwind, .. } => {
2118                if let UnwindAction::Cleanup(unwind) = *unwind {
2119                    let source_info = term.source_info;
2120                    cfg.terminate(unwind, source_info, TerminatorKind::Goto { target: to });
2121                } else {
2122                    *unwind = UnwindAction::Cleanup(to);
2123                }
2124            }
2125            TerminatorKind::FalseUnwind { unwind, .. }
2126            | TerminatorKind::Call { unwind, .. }
2127            | TerminatorKind::Assert { unwind, .. }
2128            | TerminatorKind::InlineAsm { unwind, .. } => {
2129                *unwind = UnwindAction::Cleanup(to);
2130            }
2131            TerminatorKind::Goto { .. }
2132            | TerminatorKind::SwitchInt { .. }
2133            | TerminatorKind::UnwindResume
2134            | TerminatorKind::UnwindTerminate(_)
2135            | TerminatorKind::Return
2136            | TerminatorKind::TailCall { .. }
2137            | TerminatorKind::Unreachable
2138            | TerminatorKind::Yield { .. }
2139            | TerminatorKind::CoroutineDrop
2140            | TerminatorKind::FalseEdge { .. } => {
2141                span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
2142            }
2143        }
2144    }
2145}