rustc_mir_build/builder/scope.rs

/*!
Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the THIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.

### SEME Regions

When pushing a new [Scope], we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.

For now, we record the `region::Scope` to each SEME region for later reference
(see caveat in next paragraph). This is because destruction scopes are tied to
them. This may change in the future so that MIR lowering determines its own
destruction scopes.

### Not so SEME Regions

In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that the lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact
our mapping is from one scope to a vector of SEME regions. Since the SEME
regions are disjoint, the mapping is still one-to-one for the set of SEME
regions that we're currently in.

Also in matches, the scopes assigned to arms are not always even SEME regions!
Each arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times.

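For example, in a match arm with an or-pattern and a guard, the guard is
lowered once per pattern, so its lexical scope corresponds to two disjoint
SEME regions (a sketch; `e`, `E`, and `g` are stand-ins):

```ignore (illustrative)
match e {
    E::A(x) | E::B(x) if g(&x) => { /* arm body */ }
    _ => {}
}
```
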
### Drops

The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:

```
# let cond = true;
loop {
    let x = ..;
    if cond { break; }
    let y = ..;
}
```

When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.

### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `break_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits from a scope are
necessarily to some other enclosing scope). `break_scope` will record the
set of drops currently scheduled in a [DropTree]. Later, before
`in_breakable_scope` exits, the drops will be added to the CFG.

Panics are handled in a similar fashion, except that the drops are added to the
MIR once the rest of the function has finished being lowered. If a terminator
can panic, call `diverge_from(block)` with the block containing the terminator
`block`.

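For example, in the loop below (a sketch; `f` is a stand-in), the `break` is
a normal early exit that goes through `break_scope`, while the call to `f`
can panic, so the block containing that call is registered via `diverge_from`:

```ignore (illustrative)
loop {
    let x = String::new();
    if f(&x) { break; } // drop of `x` is inserted on the exit edge
    // if `f` unwinds instead, `x` is dropped on the unwind path
}
```
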
### Breakable scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops and breakable blocks. It tracks where a `break`,
`continue` or `return` should go to.

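For example, with labeled loops (a sketch), each `break` or `continue`
resolves to a specific entry in this stack:

```ignore (illustrative)
'outer: loop {                    // pushes a breakable scope for 'outer
    loop {                        // pushes another for the inner loop
        if a { continue 'outer; } // targets the 'outer entry
        if b { break; }           // targets the innermost entry
    }
}
```
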
*/

use std::mem;

use interpret::ErrorHandled;
use rustc_data_structures::fx::FxHashMap;
use rustc_hir::HirId;
use rustc_index::{IndexSlice, IndexVec};
use rustc_middle::middle::region;
use rustc_middle::mir::{self, *};
use rustc_middle::thir::{AdtExpr, AdtExprBase, ArmId, ExprId, ExprKind, LintLevel};
use rustc_middle::ty::{self, Ty, TyCtxt, TypeVisitableExt, ValTree};
use rustc_middle::{bug, span_bug};
use rustc_pattern_analysis::rustc::RustcPatCtxt;
use rustc_session::lint::Level;
use rustc_span::source_map::Spanned;
use rustc_span::{DUMMY_SP, Span};
use tracing::{debug, instrument};

use super::matches::BuiltMatchTree;
use crate::builder::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
use crate::errors::{ConstContinueBadConst, ConstContinueUnknownJumpTarget};

#[derive(Debug)]
pub(crate) struct Scopes<'tcx> {
    scopes: Vec<Scope>,

    /// The current set of breakable scopes. See module comment for more details.
    breakable_scopes: Vec<BreakableScope<'tcx>>,

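    /// The current set of const-continuable scopes, one per enclosing
    /// `#[loop_match]`.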
    const_continuable_scopes: Vec<ConstContinuableScope<'tcx>>,

    /// The scope of the innermost if-then currently being lowered.
    if_then_scope: Option<IfThenScope>,

    /// Drops that need to be done on unwind paths. See the comment on
    /// [DropTree] for more details.
    unwind_drops: DropTree,

    /// Drops that need to be done on paths to the `CoroutineDrop` terminator.
    coroutine_drops: DropTree,
}

#[derive(Debug)]
struct Scope {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// The region span of this scope within source code.
    region_scope: region::Scope,

    /// Set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData>,

    moved_locals: Vec<Local>,

    /// The drop index that will drop everything in and below this scope on an
    /// unwind path.
    cached_unwind_block: Option<DropIdx>,

    /// The drop index that will drop everything in and below this scope on a
    /// coroutine drop path.
    cached_coroutine_drop_block: Option<DropIdx>,
}

#[derive(Clone, Copy, Debug)]
struct DropData {
    /// The `Span` where the drop obligation was incurred (typically where the
    /// place was declared).
    source_info: SourceInfo,

    /// The local to drop.
    local: Local,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub(crate) enum DropKind {
    Value,
    Storage,
    ForLint,
}

#[derive(Debug)]
struct BreakableScope<'tcx> {
    /// Region scope of the loop.
    region_scope: region::Scope,
    /// The destination of the loop/block expression itself (i.e., where to put
    /// the result of a `break` or `return` expression).
    break_destination: Place<'tcx>,
    /// Drops that happen on the `break`/`return` path.
    break_drops: DropTree,
    /// Drops that happen on the `continue` path.
    continue_drops: Option<DropTree>,
}

#[derive(Debug)]
struct ConstContinuableScope<'tcx> {
    /// The scope of the `#[loop_match]` that its `#[const_continue]`s will jump to.
    region_scope: region::Scope,
    /// The place of the state of a `#[loop_match]`, which a `#[const_continue]` must update.
    state_place: Place<'tcx>,

    arms: Box<[ArmId]>,
    built_match_tree: BuiltMatchTree<'tcx>,

    /// Drops that happen on a `#[const_continue]`.
    const_continue_drops: DropTree,
}

#[derive(Debug)]
struct IfThenScope {
    /// The if-then scope or arm scope.
    region_scope: region::Scope,
    /// Drops that happen on the `else` path.
    else_drops: DropTree,
}

/// The target of an expression that breaks out of a scope.
#[derive(Clone, Copy, Debug)]
pub(crate) enum BreakableTarget {
    Continue(region::Scope),
    Break(region::Scope),
    Return,
}

rustc_index::newtype_index! {
    #[orderable]
    struct DropIdx {}
}

const ROOT_NODE: DropIdx = DropIdx::ZERO;

/// A tree of drops that we have deferred lowering. It's used for:
///
/// * Drops on unwind paths
/// * Drops on coroutine drop paths (when a suspended coroutine is dropped)
/// * Drops on return and loop exit paths
/// * Drops on the else path in an `if let` chain
///
/// Once no more nodes can be added to the tree, we lower it to MIR in one go
/// in `build_mir`.
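///
/// For example, both `break`s below exit the same scope, so their entry
/// points share the drop nodes for `y` (a sketch):
///
/// ```ignore (illustrative)
/// loop {
///     let y = String::new();
///     if a { break; } // entry point 1: drops `y`
///     if b { break; } // entry point 2: reuses the same nodes for `y`
/// }
/// ```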
#[derive(Debug)]
struct DropTree {
    /// Nodes in the drop tree, containing drop data and a link to the next node.
    drop_nodes: IndexVec<DropIdx, DropNode>,
    /// Map for finding the index of an existing node, given its contents.
    existing_drops_map: FxHashMap<DropNodeKey, DropIdx>,
    /// Edges into the `DropTree` that need to be added once it's lowered.
    entry_points: Vec<(DropIdx, BasicBlock)>,
}

/// A single node in the drop tree.
#[derive(Debug)]
struct DropNode {
    /// Info about the drop to be performed at this node in the drop tree.
    data: DropData,
    /// Index of the "next" drop to perform (in drop order, not declaration order).
    next: DropIdx,
}

/// Subset of [`DropNode`] used for reverse lookup in a hash table.
#[derive(Debug, PartialEq, Eq, Hash)]
struct DropNodeKey {
    next: DropIdx,
    local: Local,
}

impl Scope {
    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    ///  * clang doesn't emit llvm.lifetime.end for C++ unwinding
    ///  * LLVM's memory dependency analysis can't handle it atm
    ///  * polluting the cleanup MIR with StorageDead creates
    ///    landing pads even though there are no actual destructors
    ///  * freeing up stack space has no effect during unwinding
    /// Note that for coroutines we do emit StorageDeads, for use by
    /// optimizations in the MIR coroutine transform.
    fn needs_cleanup(&self) -> bool {
        self.drops.iter().any(|drop| match drop.kind {
            DropKind::Value | DropKind::ForLint => true,
            DropKind::Storage => false,
        })
    }

    fn invalidate_cache(&mut self) {
        self.cached_unwind_block = None;
        self.cached_coroutine_drop_block = None;
    }
}

/// A trait that determines how [DropTree] creates its blocks and
/// links to any entry nodes.
trait DropTreeBuilder<'tcx> {
    /// Create a new block for the tree. This should call either
    /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;

    /// Links a block outside the drop tree, `from`, to the block `to` inside
    /// the drop tree.
    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
}

impl DropTree {
    fn new() -> Self {
        // The root node of the tree doesn't represent a drop, but instead
        // represents the block in the tree that should be jumped to once all
        // of the required drops have been performed.
        let fake_source_info = SourceInfo::outermost(DUMMY_SP);
        let fake_data =
            DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
        let drop_nodes = IndexVec::from_raw(vec![DropNode { data: fake_data, next: DropIdx::MAX }]);
        Self { drop_nodes, entry_points: Vec::new(), existing_drops_map: FxHashMap::default() }
    }

    /// Adds a node to the drop tree, consisting of drop data and the index of
    /// the "next" drop (in drop order), which could be the sentinel [`ROOT_NODE`].
    ///
    /// If there is already an equivalent node in the tree, nothing is added, and
    /// that node's index is returned. Otherwise, the new node's index is returned.
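    ///
    /// A sketch of the deduplication, assuming some `tree: DropTree` and
    /// `data: DropData` (hypothetical values for illustration):
    ///
    /// ```ignore (illustrative)
    /// let a = tree.add_drop(data, ROOT_NODE);
    /// let b = tree.add_drop(data, ROOT_NODE); // same (local, next) pair
    /// assert_eq!(a, b); // no new node was added
    /// ```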
    fn add_drop(&mut self, data: DropData, next: DropIdx) -> DropIdx {
        let drop_nodes = &mut self.drop_nodes;
        *self
            .existing_drops_map
            .entry(DropNodeKey { next, local: data.local })
            // Create a new node, and also add its index to the map.
            .or_insert_with(|| drop_nodes.push(DropNode { data, next }))
    }

    /// Registers `from` as an entry point to this drop tree, at `to`.
    ///
    /// During [`Self::build_mir`], `from` will be linked to the corresponding
    /// block within the drop tree.
    fn add_entry_point(&mut self, from: BasicBlock, to: DropIdx) {
        debug_assert!(to < self.drop_nodes.next_index());
        self.entry_points.push((to, from));
    }

    /// Builds the MIR for a given drop tree.
    fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        root_node: Option<BasicBlock>,
    ) -> IndexVec<DropIdx, Option<BasicBlock>> {
        debug!("DropTree::build_mir(drops = {:#?})", self);

        let mut blocks = self.assign_blocks::<T>(cfg, root_node);
        self.link_blocks(cfg, &mut blocks);

        blocks
    }

    /// Assign blocks for all of the drops in the drop tree that need them.
    fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        root_node: Option<BasicBlock>,
    ) -> IndexVec<DropIdx, Option<BasicBlock>> {
        // StorageDead statements can share blocks with each other and also with
        // a Drop terminator. We iterate through the drops to find which drops
        // need their own block.
        #[derive(Clone, Copy)]
        enum Block {
            // This drop is unreachable
            None,
            // This drop is only reachable through the `StorageDead` with the
            // specified index.
            Shares(DropIdx),
            // This drop has more than one way of being reached, or it is
            // branched to from outside the tree, or its predecessor is a
            // `Value` drop.
            Own,
        }

        let mut blocks = IndexVec::from_elem(None, &self.drop_nodes);
        blocks[ROOT_NODE] = root_node;

        let mut needs_block = IndexVec::from_elem(Block::None, &self.drop_nodes);
        if root_node.is_some() {
            // In some cases (such as drops for `continue`) the root node
            // already has a block. In this case, make sure that we don't
            // override it.
            needs_block[ROOT_NODE] = Block::Own;
        }

        // Sort so that we only need to check the last value.
        let entry_points = &mut self.entry_points;
        entry_points.sort();

        for (drop_idx, drop_node) in self.drop_nodes.iter_enumerated().rev() {
            if entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
                let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                needs_block[drop_idx] = Block::Own;
                while entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
                    let entry_block = entry_points.pop().unwrap().1;
                    T::link_entry_point(cfg, entry_block, block);
                }
            }
            match needs_block[drop_idx] {
                Block::None => continue,
                Block::Own => {
                    blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                }
                Block::Shares(pred) => {
                    blocks[drop_idx] = blocks[pred];
                }
            }
            if let DropKind::Value = drop_node.data.kind {
                needs_block[drop_node.next] = Block::Own;
            } else if drop_idx != ROOT_NODE {
                match &mut needs_block[drop_node.next] {
                    pred @ Block::None => *pred = Block::Shares(drop_idx),
                    pred @ Block::Shares(_) => *pred = Block::Own,
                    Block::Own => (),
                }
            }
        }

        debug!("assign_blocks: blocks = {:#?}", blocks);
        assert!(entry_points.is_empty());

        blocks
    }

    fn link_blocks<'tcx>(
        &self,
        cfg: &mut CFG<'tcx>,
        blocks: &IndexSlice<DropIdx, Option<BasicBlock>>,
    ) {
        for (drop_idx, drop_node) in self.drop_nodes.iter_enumerated().rev() {
            let Some(block) = blocks[drop_idx] else { continue };
            match drop_node.data.kind {
                DropKind::Value => {
                    let terminator = TerminatorKind::Drop {
                        target: blocks[drop_node.next].unwrap(),
                        // The caller will handle this if needed.
                        unwind: UnwindAction::Terminate(UnwindTerminateReason::InCleanup),
                        place: drop_node.data.local.into(),
                        replace: false,
                        drop: None,
                        async_fut: None,
                    };
                    cfg.terminate(block, drop_node.data.source_info, terminator);
                }
                DropKind::ForLint => {
                    let stmt = Statement {
                        source_info: drop_node.data.source_info,
                        kind: StatementKind::BackwardIncompatibleDropHint {
                            place: Box::new(drop_node.data.local.into()),
                            reason: BackwardIncompatibleDropReason::Edition2024,
                        },
                    };
                    cfg.push(block, stmt);
                    let target = blocks[drop_node.next].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info =
                            SourceInfo { span: DUMMY_SP, ..drop_node.data.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
                // Root nodes don't correspond to a drop.
                DropKind::Storage if drop_idx == ROOT_NODE => {}
                DropKind::Storage => {
                    let stmt = Statement {
                        source_info: drop_node.data.source_info,
                        kind: StatementKind::StorageDead(drop_node.data.local),
                    };
                    cfg.push(block, stmt);
                    let target = blocks[drop_node.next].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info =
                            SourceInfo { span: DUMMY_SP, ..drop_node.data.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
            }
        }
    }
}

impl<'tcx> Scopes<'tcx> {
    pub(crate) fn new() -> Self {
        Self {
            scopes: Vec::new(),
            breakable_scopes: Vec::new(),
            const_continuable_scopes: Vec::new(),
            if_then_scope: None,
            unwind_drops: DropTree::new(),
            coroutine_drops: DropTree::new(),
        }
    }

    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
        debug!("push_scope({:?})", region_scope);
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            drops: vec![],
            moved_locals: vec![],
            cached_unwind_block: None,
            cached_coroutine_drop_block: None,
        });
    }

    fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);
        scope
    }

    fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
        self.scopes
            .iter()
            .rposition(|scope| scope.region_scope == region_scope)
            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
    }

    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    fn topmost(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }
}

impl<'a, 'tcx> Builder<'a, 'tcx> {
    // Adding and removing scopes
    // ==========================

    /// Start a breakable scope, which tracks where `continue`, `break` and
    /// `return` should branch to.
    pub(crate) fn in_breakable_scope<F>(
        &mut self,
        loop_block: Option<BasicBlock>,
        break_destination: Place<'tcx>,
        span: Span,
        f: F,
    ) -> BlockAnd<()>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
    {
        let region_scope = self.scopes.topmost();
        let scope = BreakableScope {
            region_scope,
            break_destination,
            break_drops: DropTree::new(),
            continue_drops: loop_block.map(|_| DropTree::new()),
        };
        self.scopes.breakable_scopes.push(scope);
        let normal_exit_block = f(self);
        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        let break_block =
            self.build_exit_tree(breakable_scope.break_drops, region_scope, span, None);
        if let Some(drops) = breakable_scope.continue_drops {
            self.build_exit_tree(drops, region_scope, span, loop_block);
        }
        match (normal_exit_block, break_block) {
            (Some(block), None) | (None, Some(block)) => block,
            (None, None) => self.cfg.start_new_block().unit(),
            (Some(normal_block), Some(exit_block)) => {
                let target = self.cfg.start_new_block();
                let source_info = self.source_info(span);
                self.cfg.terminate(
                    normal_block.into_block(),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                self.cfg.terminate(
                    exit_block.into_block(),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                target.unit()
            }
        }
    }

    /// Start a const-continuable scope, which tracks where `#[const_continue] break` should
    /// branch to.
    pub(crate) fn in_const_continuable_scope<F>(
        &mut self,
        arms: Box<[ArmId]>,
        built_match_tree: BuiltMatchTree<'tcx>,
        state_place: Place<'tcx>,
        span: Span,
        f: F,
    ) -> BlockAnd<()>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
    {
        let region_scope = self.scopes.topmost();
        let scope = ConstContinuableScope {
            region_scope,
            state_place,
            const_continue_drops: DropTree::new(),
            arms,
            built_match_tree,
        };
        self.scopes.const_continuable_scopes.push(scope);
        let normal_exit_block = f(self);
        let const_continue_scope = self.scopes.const_continuable_scopes.pop().unwrap();
        assert!(const_continue_scope.region_scope == region_scope);

        let break_block = self.build_exit_tree(
            const_continue_scope.const_continue_drops,
            region_scope,
            span,
            None,
        );

        match (normal_exit_block, break_block) {
            (block, None) => block,
            (normal_block, Some(exit_block)) => {
                let target = self.cfg.start_new_block();
                let source_info = self.source_info(span);
                self.cfg.terminate(
                    normal_block.into_block(),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                self.cfg.terminate(
                    exit_block.into_block(),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                target.unit()
            }
        }
    }

    /// Start an if-then scope which tracks drop for `if` expressions and `if`
    /// guards.
    ///
    /// For an if-let chain:
    ///
    /// if let Some(x) = a && let Some(y) = b && let Some(z) = c { ... }
    ///
    /// There are three possible ways the condition can be false, and we may
    /// have to drop `x`, both `x` and `y`, or neither depending on which
    /// binding fails. To handle this correctly we use a `DropTree` in a
    /// similar way to a `loop` expression and 'break' out on all of the
    /// 'else' paths.
    ///
    /// Notes:
    /// - We don't need to keep a stack of scopes in the `Builder` because the
    ///   'else' paths will only leave the innermost scope.
    /// - This is also used for match guards.
    pub(crate) fn in_if_then_scope<F>(
        &mut self,
        region_scope: region::Scope,
        span: Span,
        f: F,
    ) -> (BasicBlock, BasicBlock)
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
    {
        let scope = IfThenScope { region_scope, else_drops: DropTree::new() };
        let previous_scope = mem::replace(&mut self.scopes.if_then_scope, Some(scope));

        let then_block = f(self).into_block();

        let if_then_scope = mem::replace(&mut self.scopes.if_then_scope, previous_scope).unwrap();
        assert!(if_then_scope.region_scope == region_scope);

        let else_block =
            self.build_exit_tree(if_then_scope.else_drops, region_scope, span, None).map_or_else(
                || self.cfg.start_new_block(),
                |else_block_and| else_block_and.into_block(),
            );

        (then_block, else_block)
    }

    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    #[instrument(skip(self, f), level = "debug")]
    pub(crate) fn in_scope<F, R>(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        lint_level: LintLevel,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        let source_scope = self.source_scope;
        if let LintLevel::Explicit(current_hir_id) = lint_level {
            let parent_id =
                self.source_scopes[source_scope].local_data.as_ref().unwrap_crate_local().lint_root;
            self.maybe_new_source_scope(region_scope.1.span, current_hir_id, parent_id);
        }
        self.push_scope(region_scope);
        let mut block;
        let rv = unpack!(block = f(self));
        block = self.pop_scope(region_scope, block).into_block();
        self.source_scope = source_scope;
        debug!(?block);
        block.and(rv)
    }

    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    pub(crate) fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        self.scopes.push_scope(region_scope, self.source_scope);
    }

    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    pub(crate) fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        mut block: BasicBlock,
    ) -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);

        block = self.leave_top_scope(block);

        self.scopes.pop_scope(region_scope);

        block.unit()
    }

    /// Sets up the drops for breaking from `block` to `target`.
    pub(crate) fn break_scope(
        &mut self,
        mut block: BasicBlock,
        value: Option<ExprId>,
        target: BreakableTarget,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let span = source_info.span;

        let get_scope_index = |scope: region::Scope| {
            // Find the loop-scope by its `region::Scope`.
            self.scopes
                .breakable_scopes
                .iter()
                .rposition(|breakable_scope| breakable_scope.region_scope == scope)
                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
        };
        let (break_index, destination) = match target {
            BreakableTarget::Return => {
                let scope = &self.scopes.breakable_scopes[0];
                if scope.break_destination != Place::return_place() {
                    span_bug!(span, "`return` in item with no return scope");
                }
                (0, Some(scope.break_destination))
            }
            BreakableTarget::Break(scope) => {
                let break_index = get_scope_index(scope);
                let scope = &self.scopes.breakable_scopes[break_index];
                (break_index, Some(scope.break_destination))
            }
            BreakableTarget::Continue(scope) => {
                let break_index = get_scope_index(scope);
                (break_index, None)
            }
        };

        match (destination, value) {
            (Some(destination), Some(value)) => {
                debug!("stmt_expr Break val block_context.push(SubExpr)");
                self.block_context.push(BlockFrame::SubExpr);
                block = self.expr_into_dest(destination, block, value).into_block();
                self.block_context.pop();
            }
            (Some(destination), None) => {
                self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
            }
            (None, Some(_)) => {
                panic!("`return`, `become` and `break` with a value must have a destination")
            }
            (None, None) => {
                if self.tcx.sess.instrument_coverage() {
                    // Normally we wouldn't build any MIR in this case, but that makes it
                    // harder for coverage instrumentation to extract a relevant span for
                    // `continue` expressions. So here we inject a dummy statement with the
                    // desired span.
                    self.cfg.push_coverage_span_marker(block, source_info);
                }
            }
        }

        let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
        let scope_index = self.scopes.scope_index(region_scope, span);
        let drops = if destination.is_some() {
            &mut self.scopes.breakable_scopes[break_index].break_drops
        } else {
            let Some(drops) = self.scopes.breakable_scopes[break_index].continue_drops.as_mut()
            else {
                self.tcx.dcx().span_delayed_bug(
                    source_info.span,
                    "unlabelled `continue` within labelled block",
                );
                self.cfg.terminate(block, source_info, TerminatorKind::Unreachable);

                return self.cfg.start_new_block().unit();
            };
            drops
        };

        let mut drop_idx = ROOT_NODE;
        for scope in &self.scopes.scopes[scope_index + 1..] {
            for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry_point(block, drop_idx);

        // `build_drop_trees` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);

        self.cfg.start_new_block().unit()
    }

    /// Based on `FunctionCx::eval_unevaluated_mir_constant_to_valtree`.
    fn eval_unevaluated_mir_constant_to_valtree(
        &self,
        constant: ConstOperand<'tcx>,
    ) -> Result<(ty::ValTree<'tcx>, Ty<'tcx>), interpret::ErrorHandled> {
        assert!(!constant.const_.ty().has_param());
        let (uv, ty) = match constant.const_ {
            mir::Const::Unevaluated(uv, ty) => (uv.shrink(), ty),
            mir::Const::Ty(_, c) => match c.kind() {
                // A constant that came from a const generic but was then used as an argument to
                // old-style simd_shuffle (passing as argument instead of as a generic param).
                ty::ConstKind::Value(cv) => return Ok((cv.valtree, cv.ty)),
                other => span_bug!(constant.span, "{other:#?}"),
            },
            mir::Const::Val(mir::ConstValue::Scalar(mir::interpret::Scalar::Int(val)), ty) => {
                return Ok((ValTree::from_scalar_int(self.tcx, val), ty));
            }
            // We should never encounter `Const::Val` unless MIR opts (like const prop) evaluate
            // a constant and write that value back into `Operand`s. This could happen, but is
            // unlikely. Also: all users of `simd_shuffle` are on unstable and already need to take
            // a lot of care around intrinsics. For an issue to happen here, it would require a
            // macro expanding to a `simd_shuffle` call without wrapping the constant argument in a
            // `const {}` block, while letting the user pass through arbitrary expressions.

            // FIXME(oli-obk): Replace the magic const generic argument of `simd_shuffle` with a
            // real const generic, and get rid of this entire function.
            other => span_bug!(constant.span, "{other:#?}"),
        };

        match self.tcx.const_eval_resolve_for_typeck(self.typing_env(), uv, constant.span) {
            Ok(Ok(valtree)) => Ok((valtree, ty)),
            Ok(Err(ty)) => span_bug!(constant.span, "could not convert {ty:?} to a valtree"),
            Err(e) => Err(e),
        }
    }

    /// Sets up the drops for jumping from `block` to `scope`.
    pub(crate) fn break_const_continuable_scope(
        &mut self,
        mut block: BasicBlock,
        value: ExprId,
        scope: region::Scope,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let span = source_info.span;

        // A break can only break out of a scope, so the value should be a scope.
        let rustc_middle::thir::ExprKind::Scope { value, .. } = self.thir[value].kind else {
            span_bug!(span, "break value must be a scope")
        };

        let constant = match &self.thir[value].kind {
            ExprKind::Adt(box AdtExpr { variant_index, fields, base, .. }) => {
                assert!(matches!(base, AdtExprBase::None));
                assert!(fields.is_empty());
                ConstOperand {
                    span: self.thir[value].span,
                    user_ty: None,
                    const_: Const::Ty(
                        self.thir[value].ty,
                        ty::Const::new_value(
                            self.tcx,
                            ValTree::from_branches(
                                self.tcx,
                                [ValTree::from_scalar_int(self.tcx, variant_index.as_u32().into())],
                            ),
                            self.thir[value].ty,
                        ),
                    ),
                }
            }
            _ => self.as_constant(&self.thir[value]),
        };

        let break_index = self
            .scopes
            .const_continuable_scopes
            .iter()
            .rposition(|const_continuable_scope| const_continuable_scope.region_scope == scope)
            .unwrap_or_else(|| span_bug!(span, "no enclosing const-continuable scope found"));

        let scope = &self.scopes.const_continuable_scopes[break_index];

        let state_decl = &self.local_decls[scope.state_place.as_local().unwrap()];
        let state_ty = state_decl.ty;
        let (discriminant_ty, rvalue) = match state_ty.kind() {
            ty::Adt(adt_def, _) if adt_def.is_enum() => {
                (state_ty.discriminant_ty(self.tcx), Rvalue::Discriminant(scope.state_place))
            }
            ty::Uint(_) | ty::Int(_) | ty::Float(_) | ty::Bool | ty::Char => {
                (state_ty, Rvalue::Use(Operand::Copy(scope.state_place)))
            }
            _ => span_bug!(state_decl.source_info.span, "unsupported #[loop_match] state"),
        };

        // The `PatCtxt` is normally used in pattern exhaustiveness checking, but reused
        // here because it performs normalization and const evaluation.
        let dropless_arena = rustc_arena::DroplessArena::default();
        let typeck_results = self.tcx.typeck(self.def_id);
        let cx = RustcPatCtxt {
            tcx: self.tcx,
            typeck_results,
            module: self.tcx.parent_module(self.hir_id).to_def_id(),
            // FIXME(#132279): We're in a body, should handle opaques.
            typing_env: rustc_middle::ty::TypingEnv::non_body_analysis(self.tcx, self.def_id),
            dropless_arena: &dropless_arena,
            match_lint_level: self.hir_id,
            whole_match_span: Some(rustc_span::Span::default()),
            scrut_span: rustc_span::Span::default(),
            refutable: true,
            known_valid_scrutinee: true,
        };

        let valtree = match self.eval_unevaluated_mir_constant_to_valtree(constant) {
            Ok((valtree, ty)) => {
                // Defensively check that the type is monomorphic.
                assert!(!ty.has_param());

                valtree
            }
            Err(ErrorHandled::Reported(..)) => return self.cfg.start_new_block().unit(),
            Err(ErrorHandled::TooGeneric(_)) => {
                self.tcx.dcx().emit_fatal(ConstContinueBadConst { span: constant.span });
            }
        };

        let Some(real_target) =
            self.static_pattern_match(&cx, valtree, &*scope.arms, &scope.built_match_tree)
        else {
            self.tcx.dcx().emit_fatal(ConstContinueUnknownJumpTarget { span })
        };

        self.block_context.push(BlockFrame::SubExpr);
        let state_place = scope.state_place;
        block = self.expr_into_dest(state_place, block, value).into_block();
        self.block_context.pop();

        let discr = self.temp(discriminant_ty, source_info.span);
        let scope_index = self
            .scopes
            .scope_index(self.scopes.const_continuable_scopes[break_index].region_scope, span);
        let scope = &mut self.scopes.const_continuable_scopes[break_index];
        self.cfg.push_assign(block, source_info, discr, rvalue);
        let drop_and_continue_block = self.cfg.start_new_block();
        let imaginary_target = self.cfg.start_new_block();
        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::FalseEdge { real_target: drop_and_continue_block, imaginary_target },
        );

        let drops = &mut scope.const_continue_drops;

        let drop_idx = self.scopes.scopes[scope_index + 1..]
            .iter()
            .flat_map(|scope| &scope.drops)
            .fold(ROOT_NODE, |drop_idx, &drop| drops.add_drop(drop, drop_idx));

        drops.add_entry_point(imaginary_target, drop_idx);

        self.cfg.terminate(imaginary_target, source_info, TerminatorKind::UnwindResume);

        let region_scope = scope.region_scope;
        let scope_index = self.scopes.scope_index(region_scope, span);
        let mut drops = DropTree::new();

        let drop_idx = self.scopes.scopes[scope_index + 1..]
            .iter()
            .flat_map(|scope| &scope.drops)
            .fold(ROOT_NODE, |drop_idx, &drop| drops.add_drop(drop, drop_idx));

        drops.add_entry_point(drop_and_continue_block, drop_idx);

        // `build_drop_trees` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
        self.cfg.terminate(drop_and_continue_block, source_info, TerminatorKind::UnwindResume);

        self.build_exit_tree(drops, region_scope, span, Some(real_target));

        self.cfg.start_new_block().unit()
    }

    /// Sets up the drops for breaking from `block` due to an `if` condition
    /// that turned out to be false.
    ///
    /// Must be called in the context of [`Builder::in_if_then_scope`], so that
    /// there is an if-then scope to tell us what the target scope is.
    pub(crate) fn break_for_else(&mut self, block: BasicBlock, source_info: SourceInfo) {
        let if_then_scope = self
            .scopes
            .if_then_scope
            .as_ref()
            .unwrap_or_else(|| span_bug!(source_info.span, "no if-then scope found"));

        let target = if_then_scope.region_scope;
        let scope_index = self.scopes.scope_index(target, source_info.span);

        // Upgrade `if_then_scope` to `&mut`.
        let if_then_scope = self.scopes.if_then_scope.as_mut().expect("upgrading & to &mut");

        let mut drop_idx = ROOT_NODE;
        let drops = &mut if_then_scope.else_drops;
        for scope in &self.scopes.scopes[scope_index + 1..] {
            for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry_point(block, drop_idx);

        // `build_drop_trees` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);
    }

    /// Sets up the drops for explicit tail calls.
    ///
    /// Unlike other kinds of early exits, tail calls do not go through the drop tree.
    /// Instead, all scheduled drops are immediately added to the CFG.
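    ///
    /// For example (a sketch; `become` is the unstable explicit-tail-call
    /// syntax and `g` is a stand-in):
    ///
    /// ```ignore (illustrative)
    /// fn f(x: String) -> u32 {
    ///     let tmp = String::new();
    ///     become g(x) // `tmp` is dropped here, before the call to `g`
    /// }
    /// ```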
    pub(crate) fn break_for_tail_call(
        &mut self,
        mut block: BasicBlock,
        args: &[Spanned<Operand<'tcx>>],
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let arg_drops: Vec<_> = args
            .iter()
            .rev()
            .filter_map(|arg| match &arg.node {
                Operand::Copy(_) => bug!("copy op in tail call args"),
                Operand::Move(place) => {
                    let local =
                        place.as_local().unwrap_or_else(|| bug!("projection in tail call args"));

                    if !self.local_decls[local].ty.needs_drop(self.tcx, self.typing_env()) {
                        return None;
                    }

                    Some(DropData { source_info, local, kind: DropKind::Value })
                }
                Operand::Constant(_) => None,
            })
            .collect();

        let mut unwind_to = self.diverge_cleanup_target(
            self.scopes.scopes.iter().rev().nth(1).unwrap().region_scope,
            DUMMY_SP,
        );
        let typing_env = self.typing_env();
        let unwind_drops = &mut self.scopes.unwind_drops;

        // The innermost scope contains only the destructors for the tail call
        // arguments. We only want to drop these in case of a panic, so we skip it.
        for scope in self.scopes.scopes[1..].iter().rev().skip(1) {
            // FIXME(explicit_tail_calls) code duplication with `build_scope_drops`
            for drop_data in scope.drops.iter().rev() {
                let source_info = drop_data.source_info;
                let local = drop_data.local;

                if !self.local_decls[local].ty.needs_drop(self.tcx, typing_env) {
                    continue;
                }

                match drop_data.kind {
                    DropKind::Value => {
                        // `unwind_to` should drop the value that we're about to
                        // schedule. If dropping this value panics, then we continue
                        // with the *next* value on the unwind path.
                        debug_assert_eq!(
                            unwind_drops.drop_nodes[unwind_to].data.local,
                            drop_data.local
                        );
                        debug_assert_eq!(
                            unwind_drops.drop_nodes[unwind_to].data.kind,
                            drop_data.kind
                        );
                        unwind_to = unwind_drops.drop_nodes[unwind_to].next;

                        let mut unwind_entry_point = unwind_to;

                        // The tail call arguments must be dropped if any of these drops panic.
                        for drop in arg_drops.iter().copied() {
                            unwind_entry_point = unwind_drops.add_drop(drop, unwind_entry_point);
                        }

                        unwind_drops.add_entry_point(block, unwind_entry_point);

                        let next = self.cfg.start_new_block();
                        self.cfg.terminate(
                            block,
                            source_info,
                            TerminatorKind::Drop {
                                place: local.into(),
                                target: next,
                                unwind: UnwindAction::Continue,
                                replace: false,
                                drop: None,
                                async_fut: None,
                            },
                        );
                        block = next;
                    }
                    DropKind::ForLint => {
                        self.cfg.push(
                            block,
                            Statement {
                                source_info,
                                kind: StatementKind::BackwardIncompatibleDropHint {
                                    place: Box::new(local.into()),
                                    reason: BackwardIncompatibleDropReason::Edition2024,
                                },
                            },
                        );
                    }
                    DropKind::Storage => {
                        // Only temps and vars need their storage dead.
                        assert!(local.index() > self.arg_count);
                        self.cfg.push(
                            block,
                            Statement { source_info, kind: StatementKind::StorageDead(local) },
                        );
                    }
                }
            }
        }

        block.unit()
    }

    fn is_async_drop_impl(
        tcx: TyCtxt<'tcx>,
        local_decls: &IndexVec<Local, LocalDecl<'tcx>>,
        typing_env: ty::TypingEnv<'tcx>,
        local: Local,
    ) -> bool {
        let ty = local_decls[local].ty;
        if ty.is_async_drop(tcx, typing_env) || ty.is_coroutine() {
            return true;
        }
        ty.needs_async_drop(tcx, typing_env)
    }

    fn is_async_drop(&self, local: Local) -> bool {
        Self::is_async_drop_impl(self.tcx, &self.local_decls, self.typing_env(), local)
    }

    fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let needs_cleanup = self.scopes.scopes.last().is_some_and(|scope| scope.needs_cleanup());
        let is_coroutine = self.coroutine.is_some();
        let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };

        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
        let has_async_drops = is_coroutine
            && scope.drops.iter().any(|v| v.kind == DropKind::Value && self.is_async_drop(v.local));
        let dropline_to = if has_async_drops { Some(self.diverge_dropline()) } else { None };
        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
        let typing_env = self.typing_env();
        build_scope_drops(
            &mut self.cfg,
            &mut self.scopes.unwind_drops,
            &mut self.scopes.coroutine_drops,
            scope,
            block,
            unwind_to,
            dropline_to,
            is_coroutine && needs_cleanup,
            self.arg_count,
            |v: Local| Self::is_async_drop_impl(self.tcx, &self.local_decls, typing_env, v),
        )
        .into_block()
    }

    /// Possibly creates a new source scope if `current_root` and `parent_root`
    /// are different, or if -Zmaximal-hir-to-mir-coverage is enabled.
    pub(crate) fn maybe_new_source_scope(
        &mut self,
        span: Span,
        current_id: HirId,
        parent_id: HirId,
    ) {
        let (current_root, parent_root) =
            if self.tcx.sess.opts.unstable_opts.maximal_hir_to_mir_coverage {
                // Some consumers of rustc need to map MIR locations back to HIR nodes. Currently
                // the only part of rustc that tracks MIR -> HIR is the
                // `SourceScopeLocalData::lint_root` field that tracks lint levels for MIR
                // locations. Normally the number of source scopes is limited to the set of nodes
                // with lint annotations. The -Zmaximal-hir-to-mir-coverage flag changes this
                // behavior to maximize the number of source scopes, increasing the granularity of
                // the MIR->HIR mapping.
                (current_id, parent_id)
            } else {
                // Use `maybe_lint_level_root_bounded` to avoid adding Hir dependencies on our
                // parents. We estimate the true lint roots here to avoid creating a lot of source
                // scopes.
                (
                    self.maybe_lint_level_root_bounded(current_id),
                    if parent_id == self.hir_id {
                        parent_id // this is very common
                    } else {
                        self.maybe_lint_level_root_bounded(parent_id)
                    },
                )
            };

        if current_root != parent_root {
            let lint_level = LintLevel::Explicit(current_root);
            self.source_scope = self.new_source_scope(span, lint_level);
        }
    }

    /// Walks upwards from `orig_id` to find a node which might change lint levels with attributes.
    /// It stops at `self.hir_id` and just returns it if reached.
    fn maybe_lint_level_root_bounded(&mut self, orig_id: HirId) -> HirId {
        // This assertion lets us just store `ItemLocalId` in the cache, rather
        // than the full `HirId`.
        assert_eq!(orig_id.owner, self.hir_id.owner);

        let mut id = orig_id;
        loop {
            if id == self.hir_id {
                // This is a moderately common case, mostly hit for previously unseen nodes.
                break;
            }

            if self.tcx.hir_attrs(id).iter().any(|attr| Level::from_attr(attr).is_some()) {
                // This is a rare case. It's for a node path that doesn't reach the root due to an
                // intervening lint level attribute. This result doesn't get cached.
                return id;
            }

            let next = self.tcx.parent_hir_id(id);
            if next == id {
                bug!("lint traversal reached the root of the crate");
            }
            id = next;

            // This lookup is just an optimization; it can be removed without affecting
            // functionality. It might seem strange to see this at the end of this loop, but the
            // `orig_id` passed in to this function is almost always previously unseen, for which a
            // lookup will be a miss. So we only do lookups for nodes up the parent chain, where
            // cache lookups have a very high hit rate.
            if self.lint_level_roots_cache.contains(id.local_id) {
                break;
            }
        }

        // `orig_id` traced to `self_id`; record this fact. If `orig_id` is a leaf node it will
        // rarely (never?) subsequently be searched for, but it's hard to know if that is the case.
        // The performance wins from the cache all come from caching non-leaf nodes.
        self.lint_level_roots_cache.insert(orig_id.local_id);
        self.hir_id
    }

    /// Creates a new source scope, nested in the current one.
    pub(crate) fn new_source_scope(&mut self, span: Span, lint_level: LintLevel) -> SourceScope {
        let parent = self.source_scope;
        debug!(
            "new_source_scope({:?}, {:?}) - parent({:?})={:?}",
            span,
            lint_level,
            parent,
            self.source_scopes.get(parent)
        );
        let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scopes[parent].local_data.as_ref().unwrap_crate_local().lint_root
            },
        };
        self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
            inlined: None,
            inlined_parent_scope: None,
            local_data: ClearCrossCrate::Set(scope_local_data),
        })
    }

    /// Given a span and the current source scope, make a SourceInfo.
    pub(crate) fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo { span, scope: self.source_scope }
    }
1306
1307    // Finding scopes
1308    // ==============
1309
1310    /// Returns the scope that we should use as the lifetime of an
1311    /// operand. Basically, an operand must live until it is consumed.
1312    /// This is similar to, but not quite the same as, the temporary
1313    /// scope (which can be larger or smaller).
1314    ///
1315    /// Consider:
1316    /// ```ignore (illustrative)
1317    /// let x = foo(bar(X, Y));
1318    /// ```
1319    /// We wish to pop the storage for X and Y after `bar()` is
1320    /// called, not after the whole `let` is completed.
1321    ///
1322    /// As another example, if the second argument diverges:
1323    /// ```ignore (illustrative)
1324    /// foo(Box::new(2), panic!())
1325    /// ```
1326    /// We would allocate the box but then free it on the unwinding
1327    /// path; we would also emit a free on the 'success' path from
1328    /// panic, but that will turn out to be removed as dead-code.
1329    pub(crate) fn local_scope(&self) -> region::Scope {
1330        self.scopes.topmost()
1331    }
1332
1333    // Scheduling drops
1334    // ================
1335
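    /// Convenience wrapper: schedules both the storage and the value of
    /// `local` to be dropped on exit from `region_scope`; see
    /// [schedule_drop](Self::schedule_drop).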
    pub(crate) fn schedule_drop_storage_and_value(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
    ) {
        self.schedule_drop(span, region_scope, local, DropKind::Storage);
        self.schedule_drop(span, region_scope, local, DropKind::Value);
    }

    /// Indicates that `local` should be dropped on exit from `region_scope`.
    ///
    /// When called with `DropKind::Storage`, `local` must not be the return
    /// place or a function parameter.
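    ///
    /// A hypothetical lowering of `let x = String::new();` inside scope `S`
    /// would end up scheduling (sketch, not the exact call sites):
    /// ```ignore (illustrative)
    /// this.schedule_drop(span, scope_s, local_x, DropKind::Storage); // StorageDead(x)
    /// this.schedule_drop(span, scope_s, local_x, DropKind::Value);   // Drop(x)
    /// ```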
    pub(crate) fn schedule_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
        drop_kind: DropKind,
    ) {
        let needs_drop = match drop_kind {
            DropKind::Value | DropKind::ForLint => {
                if !self.local_decls[local].ty.needs_drop(self.tcx, self.typing_env()) {
                    return;
                }
                true
            }
            DropKind::Storage => {
                if local.index() <= self.arg_count {
                    span_bug!(
                        span,
                        "`schedule_drop` called with body argument {:?} \
                        but its storage does not require a drop",
                        local,
                    )
                }
                false
            }
        };

        // When building drops, we try to cache chains of drops to reduce the
        // number of `DropTree::add_drop` calls. This, however, means that
        // whenever we add a drop into a scope which already had some entries
        // in the drop tree built (and thus, cached) for it, we must invalidate
        // all caches which might branch into the scope which had a drop just
        // added to it. This is necessary, because otherwise some other code
        // might use the cache to branch into an already-built chain of drops,
        // essentially ignoring the newly added drop.
        //
        // For example, consider there are two scopes with a drop in each. These
        // are built and thus the caches are filled:
        //
        // +--------------------------------------------------------+
        // | +---------------------------------+                    |
        // | | +--------+     +-------------+  |  +---------------+ |
        // | | | return | <-+ | drop(outer) | <-+ |  drop(middle) | |
        // | | +--------+     +-------------+  |  +---------------+ |
        // | +------------|outer_scope cache|--+                    |
        // +------------------------------|middle_scope cache|------+
        //
        // Now, a new, innermost scope is added along with a new drop into
        // both innermost and outermost scopes:
        //
        // +------------------------------------------------------------+
        // | +----------------------------------+                       |
        // | | +--------+      +-------------+  |   +---------------+   | +-------------+
        // | | | return | <+   | drop(new)   | <-+  |  drop(middle) | <--+| drop(inner) |
        // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
        // | |             +-+ +-------------+  |                       |
        // | +---|invalid outer_scope cache|----+                       |
        // +---------------------|invalid middle_scope cache|-----------+
        //
        // If, when adding `drop(new)`, we do not invalidate the cached blocks for both
        // outer_scope and middle_scope, then, when building drops for the inner (rightmost)
        // scope, the old, cached blocks, without `drop(new)`, will get used, producing the
        // wrong results.
        //
        // Note that this code iterates scopes from the innermost to the outermost,
        // invalidating caches of each scope visited. This way the bare minimum of the
        // caches gets invalidated; i.e., if a new drop is added into the middle scope, the
        // cache of the outer scope stays intact.
        //
        // Since we only cache drops for the unwind path and the coroutine drop
        // path, we only need to invalidate the cache for drops that happen on
        // the unwind or coroutine drop paths. This means that for
        // non-coroutines we don't need to invalidate caches for `DropKind::Storage`.
        let invalidate_caches = needs_drop || self.coroutine.is_some();
        for scope in self.scopes.scopes.iter_mut().rev() {
            if invalidate_caches {
                scope.invalidate_cache();
            }

            if scope.region_scope == region_scope {
                let region_scope_span = region_scope.span(self.tcx, self.region_scope_tree);
                // Attribute scope exit drops to scope's closing brace.
                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);

                scope.drops.push(DropData {
                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
                    local,
                    kind: drop_kind,
                });

                return;
            }
        }

        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
    }

    /// Schedules emission of a backwards-incompatible drop lint hint.
    /// Applicable only to temporary values for now.
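    ///
    /// For example (illustrative): when Edition 2024 drops a temporary earlier
    /// than previous editions did, a `BackwardIncompatibleDropHint` statement
    /// is recorded at the point where the *old* drop would have occurred, so
    /// that borrowck can emit migration lints such as `tail_expr_drop_order`.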
    #[instrument(level = "debug", skip(self))]
    pub(crate) fn schedule_backwards_incompatible_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
    ) {
        // Note that we are *not* gating BIDs here on whether they have a significant destructor.
        // We need to know all of them so that we can capture potential borrow-checking errors.
        for scope in self.scopes.scopes.iter_mut().rev() {
            // Since we are inserting a linting MIR statement, we have to invalidate the caches.
            scope.invalidate_cache();
            if scope.region_scope == region_scope {
                let region_scope_span = region_scope.span(self.tcx, self.region_scope_tree);
                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);

                scope.drops.push(DropData {
                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
                    local,
                    kind: DropKind::ForLint,
                });

                return;
            }
        }
        span_bug!(
            span,
            "region scope {:?} not in scope to drop {:?} for linting",
            region_scope,
            local
        );
    }

    /// Indicates that the "local operand" stored in `local` is
    /// *moved* at some point during execution (see `local_scope` for
    /// more information about what a "local operand" is -- in short,
    /// it's an intermediate operand created as part of preparing some
    /// MIR instruction). We use this information to suppress
    /// redundant drops on the non-unwind paths. This results in less
    /// MIR, but also avoids spurious borrow check errors
    /// (c.f. #64391).
    ///
    /// Example: when compiling the call to `foo` here:
    ///
    /// ```ignore (illustrative)
    /// foo(bar(), ...)
    /// ```
    ///
    /// we would evaluate `bar()` to an operand `_X`. We would also
    /// schedule `_X` to be dropped when the expression scope for
    /// `foo(bar())` is exited. This is relevant, for example, if the
    /// later arguments should unwind (it would ensure that `_X` gets
    /// dropped). However, if no unwind occurs, then `_X` will be
    /// unconditionally consumed by the `call`:
    ///
    /// ```ignore (illustrative)
    /// bb {
    ///   ...
    ///   _R = CALL(foo, _X, ...)
    /// }
    /// ```
    ///
    /// However, `_X` is still registered to be dropped, and so if we
    /// do nothing else, we would generate a `DROP(_X)` that occurs
    /// after the call. This will later be optimized out by the
    /// drop-elaboration code, but in the meantime it can lead to
    /// spurious borrow-check errors -- the problem, ironically, is
    /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
    /// that it creates. See #64391 for an example.
    pub(crate) fn record_operands_moved(&mut self, operands: &[Spanned<Operand<'tcx>>]) {
        let local_scope = self.local_scope();
        let scope = self.scopes.scopes.last_mut().unwrap();

        assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!");

        // look for moves of a local variable, like `MOVE(_X)`
        let locals_moved = operands.iter().flat_map(|operand| match operand.node {
            Operand::Copy(_) | Operand::Constant(_) => None,
            Operand::Move(place) => place.as_local(),
        });

        for local in locals_moved {
            // check if we have a Drop for this operand and -- if so
            // -- add it to the list of moved operands. Note that this
            // local might not have been an operand created for this
            // call, it could come from other places too.
            if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
                scope.moved_locals.push(local);
            }
        }
    }

    // Other
    // =====

    /// Returns the [DropIdx] for the innermost drop if the function unwound at
    /// this point. The `DropIdx` will be created if it doesn't already exist.
    fn diverge_cleanup(&mut self) -> DropIdx {
        // It is okay to use a dummy span because getting the scope index of the
        // topmost scope must always succeed.
        self.diverge_cleanup_target(self.scopes.topmost(), DUMMY_SP)
    }

    /// This is similar to [diverge_cleanup](Self::diverge_cleanup) except its target is set to
    /// some ancestor scope instead of the current scope.
    /// It is possible to unwind to some ancestor scope if some drop panics as
    /// the program breaks out of an if-then scope.
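    ///
    /// For instance (illustrative; `try_guard` is a hypothetical method):
    /// ```ignore (illustrative)
    /// if let Some(g) = lock.try_guard() { /* then */ } else { /* else */ }
    /// ```
    /// When the pattern fails to match, we break out of the if-then scope to
    /// the else branch, dropping the condition's temporaries on the way; if
    /// one of those drops panics, unwinding must continue with the drops of
    /// the ancestor (target) scope rather than those of the scope being
    /// exited.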
    fn diverge_cleanup_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
        let target = self.scopes.scope_index(target_scope, span);
        let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
            .iter()
            .enumerate()
            .rev()
            .find_map(|(scope_idx, scope)| {
                scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
            })
            .unwrap_or((0, ROOT_NODE));

        if uncached_scope > target {
            return cached_drop;
        }

        let is_coroutine = self.coroutine.is_some();
        for scope in &mut self.scopes.scopes[uncached_scope..=target] {
            for drop in &scope.drops {
                if is_coroutine || drop.kind == DropKind::Value {
                    cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
                }
            }
            scope.cached_unwind_block = Some(cached_drop);
        }

        cached_drop
    }

    /// Prepares to create a path that performs all required cleanup for a
    /// terminator that can unwind at the given basic block.
    ///
    /// This path terminates in Resume. The path isn't created until after all
    /// of the non-unwind paths in this item have been lowered.
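    ///
    /// Sketch of the expected usage pattern (hypothetical block `bb`):
    /// ```ignore (illustrative)
    /// // After giving `bb` a terminator that can unwind, e.g. a call:
    /// this.cfg.terminate(bb, source_info, TerminatorKind::Call { /* ... */ });
    /// // ...record that its unwind edge must run the scheduled cleanups:
    /// this.diverge_from(bb);
    /// ```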
    pub(crate) fn diverge_from(&mut self, start: BasicBlock) {
        debug_assert!(
            matches!(
                self.cfg.block_data(start).terminator().kind,
                TerminatorKind::Assert { .. }
                    | TerminatorKind::Call { .. }
                    | TerminatorKind::Drop { .. }
                    | TerminatorKind::FalseUnwind { .. }
                    | TerminatorKind::InlineAsm { .. }
            ),
            "diverge_from called on block with terminator that cannot unwind."
        );

        let next_drop = self.diverge_cleanup();
        self.scopes.unwind_drops.add_entry_point(start, next_drop);
    }

    /// Returns the [DropIdx] for the innermost drop for the dropline (coroutine drop path).
    /// The `DropIdx` will be created if it doesn't already exist.
    fn diverge_dropline(&mut self) -> DropIdx {
        // It is okay to use a dummy span because getting the scope index of the
        // topmost scope must always succeed.
        self.diverge_dropline_target(self.scopes.topmost(), DUMMY_SP)
    }

    /// Similar to [diverge_cleanup_target](Self::diverge_cleanup_target), but for the dropline
    /// (coroutine drop path).
    fn diverge_dropline_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
        debug_assert!(
            self.coroutine.is_some(),
            "diverge_dropline_target is valid only for coroutines"
        );
        let target = self.scopes.scope_index(target_scope, span);
        let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
            .iter()
            .enumerate()
            .rev()
            .find_map(|(scope_idx, scope)| {
                scope.cached_coroutine_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
            })
            .unwrap_or((0, ROOT_NODE));

        if uncached_scope > target {
            return cached_drop;
        }

        for scope in &mut self.scopes.scopes[uncached_scope..=target] {
            for drop in &scope.drops {
                cached_drop = self.scopes.coroutine_drops.add_drop(*drop, cached_drop);
            }
            scope.cached_coroutine_drop_block = Some(cached_drop);
        }

        cached_drop
    }

    /// Sets up a path that performs all required cleanup for dropping a
    /// coroutine, starting from the given block that ends in
    /// [TerminatorKind::Yield].
    ///
    /// This path terminates in CoroutineDrop.
    pub(crate) fn coroutine_drop_cleanup(&mut self, yield_block: BasicBlock) {
        debug_assert!(
            matches!(
                self.cfg.block_data(yield_block).terminator().kind,
                TerminatorKind::Yield { .. }
            ),
            "coroutine_drop_cleanup called on block with non-yield terminator."
        );
        let cached_drop = self.diverge_dropline();
        self.scopes.coroutine_drops.add_entry_point(yield_block, cached_drop);
    }

    /// Utility function for *non*-scope code to build their own drops.
    /// Forces a drop at this point in the MIR by creating a new block.
    pub(crate) fn build_drop_and_replace(
        &mut self,
        block: BasicBlock,
        span: Span,
        place: Place<'tcx>,
        value: Rvalue<'tcx>,
    ) -> BlockAnd<()> {
        let source_info = self.source_info(span);

        // create the new block for the assignment
        let assign = self.cfg.start_new_block();
        self.cfg.push_assign(assign, source_info, place, value.clone());

        // create the new block for the assignment in the case of unwinding
        let assign_unwind = self.cfg.start_new_cleanup_block();
        self.cfg.push_assign(assign_unwind, source_info, place, value.clone());

        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::Drop {
                place,
                target: assign,
                unwind: UnwindAction::Cleanup(assign_unwind),
                replace: true,
                drop: None,
                async_fut: None,
            },
        );
        self.diverge_from(block);

        assign.unit()
    }

    /// Creates an `Assert` terminator and returns the success block.
    /// If the boolean condition operand is not the expected value,
    /// a runtime panic will be caused with the given message.
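    ///
    /// For instance (illustrative), an index expression `a[i]` is lowered
    /// roughly as:
    /// ```ignore (illustrative)
    /// let success = this.assert(
    ///     block,
    ///     Operand::Move(in_bounds), // `i < len`, computed earlier
    ///     true,                     // panic if the check yields `false`
    ///     AssertKind::BoundsCheck { len, index },
    ///     span,
    /// );
    /// ```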
    pub(crate) fn assert(
        &mut self,
        block: BasicBlock,
        cond: Operand<'tcx>,
        expected: bool,
        msg: AssertMessage<'tcx>,
        span: Span,
    ) -> BasicBlock {
        let source_info = self.source_info(span);
        let success_block = self.cfg.start_new_block();

        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::Assert {
                cond,
                expected,
                msg: Box::new(msg),
                target: success_block,
                unwind: UnwindAction::Continue,
            },
        );
        self.diverge_from(block);

        success_block
    }

    /// Unschedules any drops in the top scope.
    ///
    /// This is only needed for `match` arm scopes, because they have one
    /// entrance per pattern, but only one exit.
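    ///
    /// For example (illustrative), an or-pattern arm such as:
    /// ```ignore (illustrative)
    /// match e {
    ///     A(s) | B(s) => body(s), // two entrances, one arm scope
    ///     _ => {}
    /// }
    /// ```
    /// enters the arm scope once per pattern; clearing the scope between
    /// entrances keeps the drops scheduled for one entrance from being
    /// duplicated for the next.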
    pub(crate) fn clear_top_scope(&mut self, region_scope: region::Scope) {
        let top_scope = self.scopes.scopes.last_mut().unwrap();

        assert_eq!(top_scope.region_scope, region_scope);

        top_scope.drops.clear();
        top_scope.invalidate_cache();
    }
}

/// Builds drops for `pop_scope` and `leave_top_scope`.
///
/// # Parameters
///
/// * `cfg`, the control-flow graph being built
/// * `unwind_drops`, the drop tree data structure storing what needs to be cleaned up if an unwind occurs
/// * `coroutine_drops`, the drop tree for the cleanup run when a suspended coroutine is dropped
/// * `scope`, describes the drops that will occur on exiting the scope in regular execution
/// * `block`, the block the drops are emitted into; the returned block is the one control reaches
///   once the drops are complete (assuming no unwind occurs)
/// * `unwind_to`, describes the drops that would occur at this point in the code if a
///   panic occurred (a subset of the drops in `scope`, since we sometimes elide StorageDead and other
///   instructions on unwinding)
/// * `dropline_to`, describes the drops that would occur at this point in the code if a
///   coroutine drop occurred
/// * `storage_dead_on_unwind`, if true, then we should emit `StorageDead` even when unwinding
/// * `arg_count`, number of MIR local variables corresponding to fn arguments (used to assert that we don't drop those)
/// * `is_async_drop`, predicate for whether a local's drop is an async drop, and thus must also be
///   linked into the coroutine drop path
fn build_scope_drops<'tcx, F>(
    cfg: &mut CFG<'tcx>,
    unwind_drops: &mut DropTree,
    coroutine_drops: &mut DropTree,
    scope: &Scope,
    block: BasicBlock,
    unwind_to: DropIdx,
    dropline_to: Option<DropIdx>,
    storage_dead_on_unwind: bool,
    arg_count: usize,
    is_async_drop: F,
) -> BlockAnd<()>
where
    F: Fn(Local) -> bool,
{
    debug!("build_scope_drops({:?} -> {:?}), dropline_to={:?}", block, scope, dropline_to);

    // Build up the drops in evaluation order. The end result will
    // look like:
    //
    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
    //               |                    |                 |
    //               :                    |                 |
    //                                    V                 V
    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
    //
    // The horizontal arrows represent the execution path when the drops return
    // successfully. The downwards arrows represent the execution path when the
    // drops panic (panicking while unwinding will abort, so there's no need for
    // another set of arrows).
    //
    // For coroutines, we unwind from a drop on a local to its StorageDead
    // statement. For other functions we don't worry about StorageDead. The
    // drops for the unwind path should have already been generated by
    // `diverge_cleanup`.

    // `unwind_to` indicates what needs to be dropped should unwinding occur.
    // This is a subset of what needs to be dropped when exiting the scope.
    // As we unwind the scope, we will also move `unwind_to` backwards to match,
    // so that we can use it should a destructor panic.
    let mut unwind_to = unwind_to;

    // `block` is the block we are currently appending drops to. On entry it is the block control
    // reaches when exiting the scope; the drops are emitted into it in reverse order of
    // scheduling (`drops[n]` first; see the diagram above), allocating a fresh successor block
    // after each value drop. The final `block`, reached once all drops are complete, is returned
    // to the caller.
    let mut block = block;

    // `dropline_to` indicates what needs to be dropped should coroutine drop occur.
    let mut dropline_to = dropline_to;

    for drop_data in scope.drops.iter().rev() {
        let source_info = drop_data.source_info;
        let local = drop_data.local;

        match drop_data.kind {
            DropKind::Value => {
                // `unwind_to` should drop the value that we're about to
                // schedule. If dropping this value panics, then we continue
                // with the *next* value on the unwind path.
                //
                // We adjust this BEFORE we create the drop (e.g., `drops[n]`)
                // because `drops[n]` should unwind to `drops[n-1]`.
                debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.local, drop_data.local);
                debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
                unwind_to = unwind_drops.drop_nodes[unwind_to].next;

                if let Some(idx) = dropline_to {
                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.local, drop_data.local);
                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.kind, drop_data.kind);
                    dropline_to = Some(coroutine_drops.drop_nodes[idx].next);
                }

                // If the operand has been moved, and we are not on an unwind
                // path, then don't generate the drop. (We only take this into
                // account for non-unwind paths so as not to disturb the
                // caching mechanism.)
                if scope.moved_locals.contains(&local) {
                    continue;
                }

                unwind_drops.add_entry_point(block, unwind_to);
                if let Some(to) = dropline_to
                    && is_async_drop(local)
                {
                    coroutine_drops.add_entry_point(block, to);
                }

                let next = cfg.start_new_block();
                cfg.terminate(
                    block,
                    source_info,
                    TerminatorKind::Drop {
                        place: local.into(),
                        target: next,
                        unwind: UnwindAction::Continue,
                        replace: false,
                        drop: None,
                        async_fut: None,
                    },
                );
                block = next;
            }
            DropKind::ForLint => {
                // As in the `DropKind::Storage` case below:
                // normally lint-related drops are not emitted for unwind,
                // so we can just leave `unwind_to` unmodified, but in some
                // cases we emit things ALSO on the unwind path, so we need to adjust
                // `unwind_to` in that case.
                if storage_dead_on_unwind {
                    debug_assert_eq!(
                        unwind_drops.drop_nodes[unwind_to].data.local,
                        drop_data.local
                    );
                    debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
                    unwind_to = unwind_drops.drop_nodes[unwind_to].next;
                }

                // If the operand has been moved, and we are not on an unwind
                // path, then don't emit the lint hint. (We only take this into
                // account for non-unwind paths so as not to disturb the
                // caching mechanism.)
                if scope.moved_locals.contains(&local) {
                    continue;
                }

                cfg.push(
                    block,
                    Statement {
                        source_info,
                        kind: StatementKind::BackwardIncompatibleDropHint {
                            place: Box::new(local.into()),
                            reason: BackwardIncompatibleDropReason::Edition2024,
                        },
                    },
                );
            }
            DropKind::Storage => {
                // Ordinarily, storage-dead nodes are not emitted on unwind, so we don't
                // need to adjust `unwind_to` on this path. However, in some specific cases
                // we *do* emit storage-dead nodes on the unwind path, and in that case now that
                // the storage-dead has completed, we need to adjust the `unwind_to` pointer
                // so that any future drops we emit will not register storage-dead.
                if storage_dead_on_unwind {
                    debug_assert_eq!(
                        unwind_drops.drop_nodes[unwind_to].data.local,
                        drop_data.local
                    );
                    debug_assert_eq!(unwind_drops.drop_nodes[unwind_to].data.kind, drop_data.kind);
                    unwind_to = unwind_drops.drop_nodes[unwind_to].next;
                }
                if let Some(idx) = dropline_to {
                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.local, drop_data.local);
                    debug_assert_eq!(coroutine_drops.drop_nodes[idx].data.kind, drop_data.kind);
                    dropline_to = Some(coroutine_drops.drop_nodes[idx].next);
                }
                // Only temps and vars need their storage dead.
                assert!(local.index() > arg_count);
                cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) });
            }
        }
    }
    block.unit()
}

impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
    /// Build a drop tree for a breakable scope.
    ///
    /// If `continue_block` is `Some`, then the tree is for `continue` inside a
    /// loop. Otherwise this is for `break` or `return`.
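    ///
    /// Sketch of a case that produces such a tree (illustrative):
    /// ```ignore (illustrative)
    /// loop {
    ///     let s = make_string();
    ///     if c1 { break; } // exit 1: drop `s`, then leave the loop
    ///     if c2 { break; } // exit 2: shares the drop chain built for exit 1
    /// }
    /// ```
    /// Each `break` records its scheduled drops in the same [DropTree];
    /// `build_exit_tree` then lowers that tree to CFG blocks, so common
    /// suffixes of the exit paths are emitted only once.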
    fn build_exit_tree(
        &mut self,
        mut drops: DropTree,
        else_scope: region::Scope,
        span: Span,
        continue_block: Option<BasicBlock>,
    ) -> Option<BlockAnd<()>> {
        let blocks = drops.build_mir::<ExitScopes>(&mut self.cfg, continue_block);
        let is_coroutine = self.coroutine.is_some();

        // Link the exit drop tree to the unwind drop tree.
        if drops.drop_nodes.iter().any(|drop_node| drop_node.data.kind == DropKind::Value) {
            let unwind_target = self.diverge_cleanup_target(else_scope, span);
            let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
            for (drop_idx, drop_node) in drops.drop_nodes.iter_enumerated().skip(1) {
                match drop_node.data.kind {
                    DropKind::Storage | DropKind::ForLint => {
                        if is_coroutine {
                            let unwind_drop = self
                                .scopes
                                .unwind_drops
                                .add_drop(drop_node.data, unwind_indices[drop_node.next]);
                            unwind_indices.push(unwind_drop);
                        } else {
                            unwind_indices.push(unwind_indices[drop_node.next]);
                        }
                    }
                    DropKind::Value => {
                        let unwind_drop = self
                            .scopes
                            .unwind_drops
                            .add_drop(drop_node.data, unwind_indices[drop_node.next]);
                        self.scopes.unwind_drops.add_entry_point(
                            blocks[drop_idx].unwrap(),
                            unwind_indices[drop_node.next],
                        );
                        unwind_indices.push(unwind_drop);
                    }
                }
            }
        }
        // Link the exit drop tree to the dropline drop tree (coroutine drop path) for async drops.
        if is_coroutine
            && drops.drop_nodes.iter().any(|DropNode { data, next: _ }| {
                data.kind == DropKind::Value && self.is_async_drop(data.local)
            })
        {
            let dropline_target = self.diverge_dropline_target(else_scope, span);
            let mut dropline_indices = IndexVec::from_elem_n(dropline_target, 1);
            for (drop_idx, drop_data) in drops.drop_nodes.iter_enumerated().skip(1) {
                let coroutine_drop = self
                    .scopes
                    .coroutine_drops
                    .add_drop(drop_data.data, dropline_indices[drop_data.next]);
                match drop_data.data.kind {
                    DropKind::Storage | DropKind::ForLint => {}
                    DropKind::Value => {
                        if self.is_async_drop(drop_data.data.local) {
                            self.scopes.coroutine_drops.add_entry_point(
                                blocks[drop_idx].unwrap(),
                                dropline_indices[drop_data.next],
                            );
                        }
                    }
                }
                dropline_indices.push(coroutine_drop);
            }
        }
        blocks[ROOT_NODE].map(BasicBlock::unit)
    }

    /// Build the unwind and coroutine drop trees.
    pub(crate) fn build_drop_trees(&mut self) {
        if self.coroutine.is_some() {
            self.build_coroutine_drop_trees();
        } else {
            Self::build_unwind_tree(
                &mut self.cfg,
                &mut self.scopes.unwind_drops,
                self.fn_span,
                &mut None,
            );
        }
    }

    fn build_coroutine_drop_trees(&mut self) {
        // Build the drop tree for dropping the coroutine while it's suspended.
        let drops = &mut self.scopes.coroutine_drops;
        let cfg = &mut self.cfg;
        let fn_span = self.fn_span;
        let blocks = drops.build_mir::<CoroutineDrop>(cfg, None);
        if let Some(root_block) = blocks[ROOT_NODE] {
            cfg.terminate(
                root_block,
                SourceInfo::outermost(fn_span),
                TerminatorKind::CoroutineDrop,
            );
        }

        // Build the drop tree for unwinding in the normal control flow paths.
        let resume_block = &mut None;
        let unwind_drops = &mut self.scopes.unwind_drops;
        Self::build_unwind_tree(cfg, unwind_drops, fn_span, resume_block);

        // Build the drop tree for unwinding when dropping a suspended
        // coroutine.
        //
        // This is a different tree from the standard unwind paths here to
        // prevent drop elaboration from creating drop flags that would have
        // to be captured by the coroutine. I'm not sure how important this
        // optimization is, but it is here.
        for (drop_idx, drop_node) in drops.drop_nodes.iter_enumerated() {
            if let DropKind::Value = drop_node.data.kind
                && let Some(bb) = blocks[drop_idx]
            {
                debug_assert!(drop_node.next < drops.drop_nodes.next_index());
                drops.entry_points.push((drop_node.next, bb));
            }
        }
        Self::build_unwind_tree(cfg, drops, fn_span, resume_block);
    }

    fn build_unwind_tree(
        cfg: &mut CFG<'tcx>,
        drops: &mut DropTree,
        fn_span: Span,
        resume_block: &mut Option<BasicBlock>,
    ) {
        let blocks = drops.build_mir::<Unwind>(cfg, *resume_block);
        if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
            cfg.terminate(resume, SourceInfo::outermost(fn_span), TerminatorKind::UnwindResume);

            *resume_block = blocks[ROOT_NODE];
        }
    }
}

// DropTreeBuilder implementations.

struct ExitScopes;

impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_block()
    }
    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        // There should be an existing terminator with real source info and a
        // dummy TerminatorKind. Replace it with a proper goto.
        // (The dummy is added by `break_scope` and `break_for_else`.)
        let term = cfg.block_data_mut(from).terminator_mut();
        if let TerminatorKind::UnwindResume = term.kind {
            term.kind = TerminatorKind::Goto { target: to };
        } else {
            span_bug!(term.source_info.span, "unexpected dummy terminator kind: {:?}", term.kind);
        }
    }
}

struct CoroutineDrop;

impl<'tcx> DropTreeBuilder<'tcx> for CoroutineDrop {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_block()
    }
    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        let term = cfg.block_data_mut(from).terminator_mut();
        if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
            *drop = Some(to);
        } else if let TerminatorKind::Drop { ref mut drop, .. } = term.kind {
            *drop = Some(to);
        } else {
            span_bug!(
                term.source_info.span,
                "cannot enter coroutine drop tree from {:?}",
                term.kind
            )
        }
    }
}

struct Unwind;

impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_cleanup_block()
    }
    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        let term = &mut cfg.block_data_mut(from).terminator_mut();
        match &mut term.kind {
            TerminatorKind::Drop { unwind, .. } => {
                if let UnwindAction::Cleanup(unwind) = *unwind {
                    let source_info = term.source_info;
                    cfg.terminate(unwind, source_info, TerminatorKind::Goto { target: to });
                } else {
                    *unwind = UnwindAction::Cleanup(to);
                }
            }
            TerminatorKind::FalseUnwind { unwind, .. }
            | TerminatorKind::Call { unwind, .. }
            | TerminatorKind::Assert { unwind, .. }
            | TerminatorKind::InlineAsm { unwind, .. } => {
                *unwind = UnwindAction::Cleanup(to);
            }
            TerminatorKind::Goto { .. }
            | TerminatorKind::SwitchInt { .. }
            | TerminatorKind::UnwindResume
            | TerminatorKind::UnwindTerminate(_)
            | TerminatorKind::Return
            | TerminatorKind::TailCall { .. }
            | TerminatorKind::Unreachable
            | TerminatorKind::Yield { .. }
            | TerminatorKind::CoroutineDrop
            | TerminatorKind::FalseEdge { .. } => {
                span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
            }
        }
    }
}