LoRA Utils#
Classes#
- class backbone.utils.lora_utils.LayerScale(dim, init_values=1e-05, inplace=False)[source]#
Bases: Module
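No docstring is attached here; for orientation, LayerScale in Vision Transformer codebases usually applies a learnable per-channel scale initialised to `init_values`, optionally in place. A minimal sketch under that assumption (the `gamma` parameter name is illustrative, not necessarily this class's attribute):

```python
import torch
import torch.nn as nn

class LayerScaleSketch(nn.Module):
    """Illustrative CaiT-style LayerScale: one learnable scale per channel."""

    def __init__(self, dim, init_values=1e-5, inplace=False):
        super().__init__()
        self.inplace = inplace
        # Per-channel scales, initialised to a small constant so the
        # residual branch starts close to the identity.
        self.gamma = nn.Parameter(init_values * torch.ones(dim))

    def forward(self, x):
        return x.mul_(self.gamma) if self.inplace else x * self.gamma

scale = LayerScaleSketch(dim=4)
out = scale(torch.ones(2, 4))  # every entry scaled by init_values
```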
- class backbone.utils.lora_utils.LoRAAttention(dim, num_heads=8, qkv_bias=False, attn_drop=0.0, proj_drop=0.0)[source]#
Bases: Module
Attention layer as used in Vision Transformer. Adapted to support LoRA-style parameters.
- Parameters:
dim – Number of input channels
num_heads – Number of attention heads
qkv_bias – If True, add a learnable bias to q, k, v
attn_drop – Dropout rate for attention weights
proj_drop – Dropout rate after the final projection
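The LoRA-style parametrisation referred to above amounts to a frozen pretrained projection plus a trainable low-rank update. A rough illustration of that idea (the `lora_A`/`lora_B` names and the rank are assumptions for the sketch, not this class's actual attributes):

```python
import torch

dim, rank = 16, 4
x = torch.randn(2, 8, dim)        # (batch, tokens, channels)
W = torch.randn(dim, dim)         # stands in for a frozen pretrained projection

# Low-rank factors: A is initialised randomly, B to zero, so the
# update B @ A starts as a no-op and is learned during fine-tuning.
lora_A = torch.randn(rank, dim)
lora_B = torch.zeros(dim, rank)

# Frozen path plus low-rank correction.
out = x @ W.T + x @ lora_A.T @ lora_B.T
```

With `lora_B` zero-initialised, `out` equals the frozen projection at the start of training.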
- class backbone.utils.lora_utils.LoRALinear(in_features, out_features, lora_dropout=0.0, fan_in_fan_out=False, **kwargs)[source]#
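The signature above suggests a linear layer augmented with trainable low-rank factors, with `lora_dropout` applied on the low-rank path. A hypothetical self-contained sketch of that pattern (rank, attribute names, and initialisation are assumptions, not the class's actual implementation):

```python
import torch
import torch.nn as nn

class LoRALinearSketch(nn.Module):
    """Frozen dense layer plus a trainable rank-r update."""

    def __init__(self, in_features, out_features, r=4, lora_dropout=0.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.linear.weight.requires_grad_(False)  # pretrained weight stays frozen
        self.lora_A = nn.Parameter(torch.randn(r, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no-op at start
        self.dropout = nn.Dropout(lora_dropout)

    def forward(self, x):
        return self.linear(x) + self.dropout(x) @ self.lora_A.T @ self.lora_B.T

layer = LoRALinearSketch(8, 8)
y = layer(torch.randn(2, 8))
```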
- class backbone.utils.lora_utils.LoRAMlp(in_features, hidden_features=None, out_features=None, act_layer=<class 'torch.nn.modules.activation.GELU'>, norm_layer=None, bias=True, drop=0.0, use_conv=False)[source]#
Bases: Module
MLP as used in Vision Transformer, MLP-Mixer and related networks. Adapted to support LoRA-style parameters.
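For reference, the ViT-style MLP this class adapts is two linear layers with an activation and dropout in between, where `hidden_features` and `out_features` default to `in_features` when unset. A plain-PyTorch sketch with ordinary `nn.Linear` standing in for the LoRA-adapted layers:

```python
import torch
import torch.nn as nn

in_features, hidden = 16, 64

# Illustrative ViT MLP block: fc1 -> GELU -> dropout -> fc2 -> dropout.
mlp = nn.Sequential(
    nn.Linear(in_features, hidden),
    nn.GELU(),
    nn.Dropout(0.0),
    nn.Linear(hidden, in_features),  # out_features defaults to in_features
    nn.Dropout(0.0),
)

out = mlp(torch.randn(2, 8, in_features))
```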
Functions#
- backbone.utils.lora_utils.to_2tuple(x)#
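No description is given for this helper; functions with this name typically mirror the `timm` utility that passes iterables through unchanged and repeats a scalar twice. A sketch under that assumption:

```python
import collections.abc
from itertools import repeat

def to_2tuple(x):
    """Return x as a 2-tuple: iterables pass through, scalars are duplicated."""
    if isinstance(x, collections.abc.Iterable) and not isinstance(x, str):
        return tuple(x)
    return tuple(repeat(x, 2))
```

Typical use is normalising arguments such as kernel sizes or dropout rates, so callers may pass either `3` or `(3, 3)`.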