pub struct FunctionCx<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> {
instance: Instance<'tcx>,
mir: &'tcx Body<'tcx>,
debug_context: Option<FunctionDebugContext<'tcx, Bx::DIScope, Bx::DILocation>>,
llfn: Bx::Function,
cx: &'a Bx::CodegenCx,
fn_abi: &'tcx FnAbi<'tcx, Ty<'tcx>>,
personality_slot: Option<PlaceRef<'tcx, Bx::Value>>,
cached_llbbs: IndexVec<BasicBlock, CachedLlbb<Bx::BasicBlock>>,
cleanup_kinds: Option<IndexVec<BasicBlock, CleanupKind>>,
funclets: IndexVec<BasicBlock, Option<Bx::Funclet>>,
landing_pads: IndexVec<BasicBlock, Option<Bx::BasicBlock>>,
unreachable_block: Option<Bx::BasicBlock>,
terminate_block: Option<(Bx::BasicBlock, UnwindTerminateReason)>,
cold_blocks: IndexVec<BasicBlock, bool>,
locals: Locals<'tcx, Bx::Value>,
per_local_var_debug_info: Option<IndexVec<Local, Vec<PerLocalVarDebugInfo<'tcx, Bx::DIVariable>>>>,
caller_location: Option<OperandRef<'tcx, Bx::Value>>,
}
Master context for codegenning from MIR.
Fields
instance: Instance<'tcx>
mir: &'tcx Body<'tcx>
debug_context: Option<FunctionDebugContext<'tcx, Bx::DIScope, Bx::DILocation>>
llfn: Bx::Function
cx: &'a Bx::CodegenCx
fn_abi: &'tcx FnAbi<'tcx, Ty<'tcx>>
personality_slot: Option<PlaceRef<'tcx, Bx::Value>>
When unwinding is initiated, we have to store this personality value somewhere so that we can load it and re-use it in the resume instruction. The personality is (afaik) some kind of value used for C++ unwinding, which must filter by type: we don’t really care about it very much. Anyway, this value contains an alloca into which the personality is stored and then later loaded when generating the DIVERGE_BLOCK.
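For intuition, here is a minimal standalone sketch (illustrative only, not compiler code; the real field is an Option<PlaceRef<'tcx, Bx::Value>>) of the lazy-slot pattern this describes: nothing is allocated until the first landing pad needs the slot, and the same slot is reused by every later resume.

// Stand-in types: the "alloca" is just a counter here.
struct Slots {
    personality_slot: Option<usize>, // stand-in for the cached alloca
    next_alloca: usize,              // stand-in for emitting a new alloca
}

impl Slots {
    fn get_personality_slot(&mut self) -> usize {
        if let Some(slot) = self.personality_slot {
            return slot; // reuse the slot created by an earlier landing pad
        }
        let slot = self.next_alloca; // "emit the alloca" only on first use
        self.next_alloca += 1;
        self.personality_slot = Some(slot);
        slot
    }
}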
cached_llbbs: IndexVec<BasicBlock, CachedLlbb<Bx::BasicBlock>>
A backend BasicBlock for each MIR BasicBlock, created lazily as-needed (e.g. RPO reaching it or another block branching to it).
cleanup_kinds: Option<IndexVec<BasicBlock, CleanupKind>>
The funclet status of each basic block
funclets: IndexVec<BasicBlock, Option<Bx::Funclet>>
When targeting MSVC, this stores the cleanup info for each funclet BB. This is initialized at the same time as the landing_pads entry for the funclets' head block, i.e. when needed by an unwind / cleanup_ret edge.
landing_pads: IndexVec<BasicBlock, Option<Bx::BasicBlock>>
This stores the cached landing/cleanup pad block for a given BB.
unreachable_block: Option<Bx::BasicBlock>
Cached unreachable block
terminate_block: Option<(Bx::BasicBlock, UnwindTerminateReason)>
Cached block that terminates upon unwinding, together with its reason.
cold_blocks: IndexVec<BasicBlock, bool>
A bool flag for each basic block indicating whether it is a cold block. A cold block is a block that is unlikely to be executed at runtime.
locals: Locals<'tcx, Bx::Value>
The location where each MIR arg/var/tmp/ret is stored. This is usually a PlaceRef representing an alloca, but not always: sometimes we can skip the alloca and just store the value directly using an OperandRef, which makes for tighter LLVM IR. The conditions for using an OperandRef are as follows:
- the type of the local must be judged "immediate" by is_llvm_immediate
- the operand must never be referenced indirectly
  - we should not take its address using the & operator
  - nor should it appear in a place path like tmp.a
- the operand must be defined by an rvalue that can generate immediate values
Avoiding allocas can also be important for certain intrinsics, notably expect.
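As a surface-Rust illustration (a hypothetical example, not compiler code) of how these conditions separate locals:

fn example(x: i32, y: i32) -> i32 {
    let a = x + y; // immediate scalar, never referenced indirectly: can stay an OperandRef (SSA value)
    let b = x * y; // its address is taken below, so it needs an alloca
    let p = &b;    // taking the address forces `b` into memory
    a + *p
}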
per_local_var_debug_info: Option<IndexVec<Local, Vec<PerLocalVarDebugInfo<'tcx, Bx::DIVariable>>>>
All VarDebugInfo from the MIR body, partitioned by Local.
This is None if no variable debuginfo/names are needed.
caller_location: Option<OperandRef<'tcx, Bx::Value>>
Caller location propagated if this function has #[track_caller].
Implementations
impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx>
Codegen implementations for some terminator variants.
fn codegen_resume_terminator( &mut self, helper: TerminatorCodegenHelper<'tcx>, bx: &mut Bx, )
Generates code for a Resume terminator.
fn codegen_switchint_terminator( &mut self, helper: TerminatorCodegenHelper<'tcx>, bx: &mut Bx, discr: &Operand<'tcx>, targets: &SwitchTargets, )
fn codegen_return_terminator(&mut self, bx: &mut Bx)
fn codegen_drop_terminator( &mut self, helper: TerminatorCodegenHelper<'tcx>, bx: &mut Bx, source_info: &SourceInfo, location: Place<'tcx>, target: BasicBlock, unwind: UnwindAction, mergeable_succ: bool, ) -> MergingSucc
fn codegen_assert_terminator( &mut self, helper: TerminatorCodegenHelper<'tcx>, bx: &mut Bx, terminator: &Terminator<'tcx>, cond: &Operand<'tcx>, expected: bool, msg: &AssertMessage<'tcx>, target: BasicBlock, unwind: UnwindAction, mergeable_succ: bool, ) -> MergingSucc
fn codegen_terminate_terminator( &mut self, helper: TerminatorCodegenHelper<'tcx>, bx: &mut Bx, terminator: &Terminator<'tcx>, reason: UnwindTerminateReason, )
fn codegen_panic_intrinsic( &mut self, helper: &TerminatorCodegenHelper<'tcx>, bx: &mut Bx, intrinsic: IntrinsicDef, instance: Instance<'tcx>, source_info: SourceInfo, target: Option<BasicBlock>, unwind: UnwindAction, mergeable_succ: bool, ) -> Option<MergingSucc>
Returns Some if this is indeed a panic intrinsic and codegen is done.
fn codegen_call_terminator( &mut self, helper: TerminatorCodegenHelper<'tcx>, bx: &mut Bx, terminator: &Terminator<'tcx>, func: &Operand<'tcx>, args: &[Spanned<Operand<'tcx>>], destination: Place<'tcx>, target: Option<BasicBlock>, unwind: UnwindAction, fn_span: Span, mergeable_succ: bool, ) -> MergingSucc
fn codegen_asm_terminator( &mut self, helper: TerminatorCodegenHelper<'tcx>, bx: &mut Bx, asm_macro: InlineAsmMacro, terminator: &Terminator<'tcx>, template: &[InlineAsmTemplatePiece], operands: &[InlineAsmOperand<'tcx>], options: InlineAsmOptions, line_spans: &[Span], targets: &[BasicBlock], unwind: UnwindAction, instance: Instance<'_>, mergeable_succ: bool, ) -> MergingSucc
pub(crate) fn codegen_block(&mut self, bb: BasicBlock)
pub(crate) fn codegen_block_as_unreachable(&mut self, bb: BasicBlock)
fn codegen_terminator( &mut self, bx: &mut Bx, bb: BasicBlock, terminator: &'tcx Terminator<'tcx>, ) -> MergingSucc
fn codegen_argument( &mut self, bx: &mut Bx, op: OperandRef<'tcx, Bx::Value>, llargs: &mut Vec<Bx::Value>, arg: &ArgAbi<'tcx, Ty<'tcx>>, lifetime_ends_after_call: &mut Vec<(Bx::Value, Size)>, )
fn codegen_arguments_untupled( &mut self, bx: &mut Bx, operand: &Operand<'tcx>, llargs: &mut Vec<Bx::Value>, args: &[ArgAbi<'tcx, Ty<'tcx>>], lifetime_ends_after_call: &mut Vec<(Bx::Value, Size)>, ) -> usize
pub(super) fn get_caller_location( &mut self, bx: &mut Bx, source_info: SourceInfo, ) -> OperandRef<'tcx, Bx::Value>
fn get_personality_slot(&mut self, bx: &mut Bx) -> PlaceRef<'tcx, Bx::Value>
fn landing_pad_for(&mut self, bb: BasicBlock) -> Bx::BasicBlock
Returns the landing/cleanup pad wrapper around the given basic block.
fn landing_pad_for_uncached(&mut self, bb: BasicBlock) -> Bx::BasicBlock
fn unreachable_block(&mut self) -> Bx::BasicBlock
fn terminate_block(&mut self, reason: UnwindTerminateReason) -> Bx::BasicBlock
pub fn llbb(&mut self, bb: BasicBlock) -> Bx::BasicBlock
Get the backend BasicBlock for a MIR BasicBlock, either already cached in self.cached_llbbs, or created on demand (and cached).
pub(crate) fn try_llbb(&mut self, bb: BasicBlock) -> Option<Bx::BasicBlock>
Like llbb, but may fail if the basic block should be skipped.
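A standalone sketch (simplified types and names, not the actual CachedLlbb definition) of the get-or-create-or-skip caching pattern that llbb and try_llbb describe:

// Each MIR block index maps to a cache slot that is either not yet created,
// already created, or marked as "skip this block".
#[derive(Clone, Copy)]
enum Cached<B> {
    None,
    Some(B),
    Skip,
}

struct BlockCache<B> {
    slots: Vec<Cached<B>>,
}

impl<B: Copy> BlockCache<B> {
    // Analogue of try_llbb: materialize the backend block on demand,
    // unless the slot says the block should be skipped.
    fn try_get(&mut self, bb: usize, mut create: impl FnMut() -> B) -> Option<B> {
        match self.slots[bb] {
            Cached::Some(b) => Some(b),
            Cached::Skip => None,
            Cached::None => {
                let b = create();
                self.slots[bb] = Cached::Some(b);
                Some(b)
            }
        }
    }
}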
fn make_return_dest( &mut self, bx: &mut Bx, dest: Place<'tcx>, fn_ret: &ArgAbi<'tcx, Ty<'tcx>>, llargs: &mut Vec<Bx::Value>, ) -> ReturnDest<'tcx, Bx::Value>
fn store_return( &mut self, bx: &mut Bx, dest: ReturnDest<'tcx, Bx::Value>, ret_abi: &ArgAbi<'tcx, Ty<'tcx>>, llval: Bx::Value, )
impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx>
pub(crate) fn eval_mir_constant_to_operand( &self, bx: &mut Bx, constant: &ConstOperand<'tcx>, ) -> OperandRef<'tcx, Bx::Value>
pub fn eval_mir_constant( &self, constant: &ConstOperand<'tcx>, ) -> ConstValue<'tcx>
fn eval_unevaluated_mir_constant_to_valtree( &self, constant: &ConstOperand<'tcx>, ) -> Result<Result<ValTree<'tcx>, Ty<'tcx>>, ErrorHandled>
This is a convenience helper for immediate_const_vector. It has the precondition that the given constant is a Const::Unevaluated and must be convertible to a ValTree. If you want a more general version of this, talk to wg-const-eval on Zulip.
Note that this function is cursed, since usually MIR consts should not be evaluated to valtrees!
pub fn immediate_const_vector( &mut self, bx: &Bx, constant: &ConstOperand<'tcx>, ) -> (Bx::Value, Ty<'tcx>)
Process a constant containing SIMD shuffle indices or constant vectors.
impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx>
pub(crate) fn codegen_coverage( &self, bx: &mut Bx, kind: &CoverageKind, scope: SourceScope, )
impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx>
pub fn set_debug_loc(&self, bx: &mut Bx, source_info: SourceInfo)
fn dbg_loc(&self, source_info: SourceInfo) -> Option<Bx::DILocation>
fn adjusted_span_and_dbg_scope( &self, source_info: SourceInfo, ) -> Option<(Bx::DIScope, Option<Bx::DILocation>, Span)>
fn spill_operand_to_stack( operand: OperandRef<'tcx, Bx::Value>, name: Option<String>, bx: &mut Bx, ) -> PlaceRef<'tcx, Bx::Value>
pub(crate) fn debug_introduce_local(&self, bx: &mut Bx, local: Local)
Apply debuginfo and/or name, after creating the alloca for a local, or initializing the local with an operand (whichever applies).
fn debug_introduce_local_as_var( &self, bx: &mut Bx, local: Local, base: PlaceRef<'tcx, Bx::Value>, var: PerLocalVarDebugInfo<'tcx, Bx::DIVariable>, )
pub(crate) fn debug_introduce_locals( &self, bx: &mut Bx, consts: Vec<ConstDebugInfo<'a, 'tcx, Bx>>, )
pub(crate) fn compute_per_local_var_debug_info( &self, bx: &mut Bx, ) -> Option<(IndexVec<Local, Vec<PerLocalVarDebugInfo<'tcx, Bx::DIVariable>>>, Vec<ConstDebugInfo<'a, 'tcx, Bx>>)>
Partition all VarDebugInfo in self.mir, by their base Local.
impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx>
pub fn codegen_intrinsic_call( &mut self, bx: &mut Bx, instance: Instance<'tcx>, args: &[OperandRef<'tcx, Bx::Value>], result: PlaceRef<'tcx, Bx::Value>, source_info: SourceInfo, ) -> Result<(), Instance<'tcx>>
In the Err case, returns the instance that should be called instead.
impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx>
pub(super) fn initialize_locals( &mut self, values: Vec<LocalRef<'tcx, Bx::Value>>, )
pub(super) fn overwrite_local( &mut self, local: Local, value: LocalRef<'tcx, Bx::Value>, )
impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx>
fn maybe_codegen_consume_direct( &mut self, bx: &mut Bx, place_ref: PlaceRef<'tcx>, ) -> Option<OperandRef<'tcx, Bx::Value>>
pub fn codegen_consume( &mut self, bx: &mut Bx, place_ref: PlaceRef<'tcx>, ) -> OperandRef<'tcx, Bx::Value>
pub fn codegen_operand( &mut self, bx: &mut Bx, operand: &Operand<'tcx>, ) -> OperandRef<'tcx, Bx::Value>
impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx>
pub fn codegen_place( &mut self, bx: &mut Bx, place_ref: PlaceRef<'tcx>, ) -> PlaceRef<'tcx, Bx::Value>
pub fn monomorphized_place_ty(&self, place_ref: PlaceRef<'tcx>) -> Ty<'tcx>
impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx>
pub(crate) fn codegen_rvalue( &mut self, bx: &mut Bx, dest: PlaceRef<'tcx, Bx::Value>, rvalue: &Rvalue<'tcx>, )
fn codegen_transmute( &mut self, bx: &mut Bx, src: OperandRef<'tcx, Bx::Value>, dst: PlaceRef<'tcx, Bx::Value>, )
Transmutes the src value to the destination type by writing it to dst.
See also Self::codegen_transmute_operand for cases that can be done without needing a pre-allocated place for the destination.
pub(crate) fn codegen_transmute_operand( &mut self, bx: &mut Bx, operand: OperandRef<'tcx, Bx::Value>, cast: TyAndLayout<'tcx>, ) -> OperandValue<Bx::Value>
Transmutes an OperandValue to another OperandValue.
This is supported only for cases where Self::rvalue_creates_operand returns true, and will ICE otherwise. (In particular, anything that would need to alloca in order to return a PlaceValue will ICE, expecting those to go via Self::codegen_transmute instead, where the destination place is already allocated.)
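For intuition, a hedged surface-Rust illustration (not backend code) of the two situations: the first transmute yields a plain scalar that can remain an immediate operand, while the second yields a large aggregate that a backend would typically have to materialize in memory, i.e. the kind of case routed through Self::codegen_transmute with a destination place.

fn bits_of(x: f32) -> u32 {
    // scalar-to-scalar: an immediate-to-immediate transmute
    unsafe { std::mem::transmute::<f32, u32>(x) }
}

fn widen(x: [u64; 16]) -> [u8; 128] {
    // same-size aggregate-to-aggregate: the result lives in memory, not in an immediate
    unsafe { std::mem::transmute::<[u64; 16], [u8; 128]>(x) }
}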
fn cast_immediate( &self, bx: &mut Bx, imm: Bx::Value, from_scalar: Scalar, from_backend_ty: Bx::Type, to_scalar: Scalar, to_backend_ty: Bx::Type, ) -> Option<Bx::Value>
Cast one of the immediates from an OperandValue::Immediate or an OperandValue::Pair to an immediate of the target type.
Returns None if the cast is not possible.
pub(crate) fn codegen_rvalue_unsized( &mut self, bx: &mut Bx, indirect_dest: PlaceRef<'tcx, Bx::Value>, rvalue: &Rvalue<'tcx>, )
pub(crate) fn codegen_rvalue_operand( &mut self, bx: &mut Bx, rvalue: &Rvalue<'tcx>, ) -> OperandRef<'tcx, Bx::Value>
fn evaluate_array_len(&mut self, bx: &mut Bx, place: Place<'tcx>) -> Bx::Value
fn codegen_place_to_pointer( &mut self, bx: &mut Bx, place: Place<'tcx>, mk_ptr_ty: impl FnOnce(TyCtxt<'tcx>, Ty<'tcx>) -> Ty<'tcx>, ) -> OperandRef<'tcx, Bx::Value>
Codegen an Rvalue::RawPtr or Rvalue::Ref.
fn codegen_scalar_binop( &mut self, bx: &mut Bx, op: BinOp, lhs: Bx::Value, rhs: Bx::Value, lhs_ty: Ty<'tcx>, rhs_ty: Ty<'tcx>, ) -> Bx::Value
fn codegen_wide_ptr_binop( &mut self, bx: &mut Bx, op: BinOp, lhs_addr: Bx::Value, lhs_extra: Bx::Value, rhs_addr: Bx::Value, rhs_extra: Bx::Value, _input_ty: Ty<'tcx>, ) -> Bx::Value
fn codegen_scalar_checked_binop( &mut self, bx: &mut Bx, op: BinOp, lhs: Bx::Value, rhs: Bx::Value, input_ty: Ty<'tcx>, ) -> OperandValue<Bx::Value>
pub(crate) fn rvalue_creates_operand( &self, rvalue: &Rvalue<'tcx>, span: Span, ) -> bool
Returns true if the rvalue can be computed into an OperandRef, rather than needing a full PlaceRef for the assignment destination.
This is used by the super::analyze code to decide which MIR locals can stay as SSA values (as opposed to generating alloca slots for them).
As such, some paths here return true even where the specific rvalue will not actually take the operand path, because the result type is such that it always gets an alloca, but where it's not worth re-checking the layout in this code when the right thing will happen anyway.
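A surface-Rust sketch (illustrative only, not the analysis itself) of the kind of distinction being drawn: scalar and small-pair rvalues can be computed into operands, while a large aggregate result ends up in an alloca either way.

fn rvalues(x: u32, y: u32) -> u32 {
    let sum = x + y;        // scalar BinaryOp result: operand path
    let pair = (x, y);      // small pair of immediates: still an operand
    let big = [0u8; 4096];  // large aggregate: backed by an alloca regardless
    sum + pair.1 + u32::from(big[0])
}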
impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx>
pub(crate) fn codegen_statement( &mut self, bx: &mut Bx, statement: &Statement<'tcx>, )
impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx>
pub fn monomorphize<T>(&self, value: T) -> T
Auto Trait Implementations
impl<'a, 'tcx, Bx> DynSend for FunctionCx<'a, 'tcx, Bx> where
<Bx as BackendTypes>::Function: DynSend,
<Bx as BuilderMethods<'a, 'tcx>>::CodegenCx: DynSync,
<Bx as BackendTypes>::BasicBlock: DynSend,
<Bx as BackendTypes>::DIScope: DynSend,
<Bx as BackendTypes>::Value: DynSend,
<Bx as BackendTypes>::Funclet: DynSend,
<Bx as BackendTypes>::DILocation: DynSend,
<Bx as BackendTypes>::DIVariable: DynSend,
impl<'a, 'tcx, Bx> DynSync for FunctionCx<'a, 'tcx, Bx> where
<Bx as BackendTypes>::Function: DynSync,
<Bx as BuilderMethods<'a, 'tcx>>::CodegenCx: DynSync,
<Bx as BackendTypes>::BasicBlock: DynSync,
<Bx as BackendTypes>::DIScope: DynSync,
<Bx as BackendTypes>::Value: DynSync,
<Bx as BackendTypes>::Funclet: DynSync,
<Bx as BackendTypes>::DILocation: DynSync,
<Bx as BackendTypes>::DIVariable: DynSync,
impl<'a, 'tcx, Bx> Freeze for FunctionCx<'a, 'tcx, Bx> where
<Bx as BackendTypes>::Function: Freeze,
<Bx as BackendTypes>::BasicBlock: Freeze,
<Bx as BackendTypes>::Value: Freeze,
impl<'a, 'tcx, Bx> !RefUnwindSafe for FunctionCx<'a, 'tcx, Bx>
impl<'a, 'tcx, Bx> Send for FunctionCx<'a, 'tcx, Bx> where
<Bx as BackendTypes>::Function: Send,
<Bx as BuilderMethods<'a, 'tcx>>::CodegenCx: Sync,
<Bx as BackendTypes>::BasicBlock: Send,
<Bx as BackendTypes>::Funclet: Send,
<Bx as BackendTypes>::Value: Send,
<Bx as BackendTypes>::DIScope: Send,
<Bx as BackendTypes>::DILocation: Send,
<Bx as BackendTypes>::DIVariable: Send,
impl<'a, 'tcx, Bx> Sync for FunctionCx<'a, 'tcx, Bx> where
<Bx as BackendTypes>::Function: Sync,
<Bx as BuilderMethods<'a, 'tcx>>::CodegenCx: Sync,
<Bx as BackendTypes>::BasicBlock: Sync,
<Bx as BackendTypes>::Value: Sync,
<Bx as BackendTypes>::Funclet: Sync,
<Bx as BackendTypes>::DIScope: Sync,
<Bx as BackendTypes>::DILocation: Sync,
<Bx as BackendTypes>::DIVariable: Sync,
impl<'a, 'tcx, Bx> Unpin for FunctionCx<'a, 'tcx, Bx> where
<Bx as BackendTypes>::Function: Unpin,
<Bx as BackendTypes>::BasicBlock: Unpin,
<Bx as BackendTypes>::Value: Unpin,
<Bx as BackendTypes>::Funclet: Unpin,
<Bx as BackendTypes>::DIScope: Unpin,
<Bx as BackendTypes>::DILocation: Unpin,
<Bx as BackendTypes>::DIVariable: Unpin,
impl<'a, 'tcx, Bx> !UnwindSafe for FunctionCx<'a, 'tcx, Bx>
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T, R> CollectAndApply<T, R> for T
impl<T> Filterable for T
fn filterable( self, filter_name: &'static str, ) -> RequestFilterDataProvider<T, fn(DataRequest<'_>) -> bool>
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.
impl<P> IntoQueryParam<P> for P
fn into_query_param(self) -> P
impl<T> MaybeResult<T> for T
impl<T> Pointable for T
impl<I, T, U> Upcast<I, U> for T where U: UpcastFrom<I, T>
impl<I, T> UpcastFrom<I, T> for T
fn upcast_from(from: T, _tcx: I) -> T
impl<Tcx, T> Value<Tcx> for T where Tcx: DepContext
default fn from_cycle_error( tcx: Tcx, cycle_error: &CycleError, _guar: ErrorGuaranteed, ) -> T
impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
fn with_current_subscriber(self) -> WithDispatch<Self>
impl<T> ErasedDestructor for T where T: 'static
impl<T> MaybeSendSync for T
Layout
Note: Unable to compute type layout, possibly due to this type having generic parameters. Layout can only be computed for concrete, fully-instantiated types.