LoRA Utils#

Classes#

class backbone.utils.lora_utils.LayerScale(dim, init_values=1e-05, inplace=False)[source]#

Bases: Module

forward(x)[source]#
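
LayerScale multiplies each channel of its input by a learnable per-channel scale initialised to a small constant (init_values), as in CaiT-style Vision Transformers. A minimal sketch of the computation, assuming the usual timm-style behaviour (the class name and in-place handling below are illustrative, not this module's exact code):

    import torch
    from torch import nn

    class LayerScaleSketch(nn.Module):
        """Minimal sketch of a LayerScale block (assumed timm-style behaviour)."""

        def __init__(self, dim, init_values=1e-5, inplace=False):
            super().__init__()
            self.inplace = inplace
            # One learnable scale per channel, initialised to a small constant.
            self.gamma = nn.Parameter(init_values * torch.ones(dim))

        def forward(self, x):
            # Scale the last (channel) dimension; the in-place variant avoids a copy.
            return x.mul_(self.gamma) if self.inplace else x * self.gamma

    # Scale token embeddings of shape (batch, tokens, dim).
    x = torch.randn(2, 16, 384)
    print(LayerScaleSketch(dim=384)(x).shape)  # torch.Size([2, 16, 384])
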
class backbone.utils.lora_utils.LoRAAttention(dim, num_heads=8, qkv_bias=False, attn_drop=0.0, proj_drop=0.0)[source]#

Bases: Module

Attention layer as used in Vision Transformer. Adapted to support LoRA-style parameters.

Parameters:
  • dim – Number of input channels

  • num_heads – Number of attention heads

  • qkv_bias – If True, add a learnable bias to q, k, v

  • attn_drop – Dropout rate for attention weights

  • proj_drop – Dropout rate after the final projection

forward(x, AB=None, **kwargs)[source]#

Forward pass of the attention layer. Supports AB for LoRA-style parameters (see the docs for VisionTransformer.forward). A usage sketch follows the parameter list below.

Parameters:
  • x – Input tensor

  • AB (dict | None) – Dictionary containing LoRA-style parameters for the layer
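
A usage sketch, assuming the layer returns a tensor with the same shape as its input; the exact keys and shapes expected in AB follow the convention described in VisionTransformer.forward and are not reproduced here:

    import torch
    from backbone.utils.lora_utils import LoRAAttention

    # Token embeddings of shape (batch, tokens, dim), e.g. a ViT-B/16 sequence.
    x = torch.randn(2, 197, 768)
    attn = LoRAAttention(dim=768, num_heads=12, qkv_bias=True)

    # With AB=None the layer behaves like a plain ViT attention block.
    out = attn(x)
    print(out.shape)  # expected: torch.Size([2, 197, 768])

    # Passing AB injects low-rank LoRA factors for this layer's projections;
    # see VisionTransformer.forward for the dictionary layout.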

class backbone.utils.lora_utils.LoRALayer(lora_dropout)[source]#

Bases: object

class backbone.utils.lora_utils.LoRALinear(in_features, out_features, lora_dropout=0.0, fan_in_fan_out=False, **kwargs)[source]#

Bases: Linear, LoRALayer

forward(x, AB=None)[source]#
reset_parameters()[source]#
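
LoRALinear pairs a standard nn.Linear with externally supplied low-rank factors. The standard LoRA update it is modelled on, written as a stand-alone sketch (the A/B naming, the shapes and the absence of an extra scaling factor are assumptions here, not this class's exact AB format):

    import torch
    from torch import nn

    def lora_linear_sketch(x, base: nn.Linear, A, B, dropout_p=0.0):
        """Standard LoRA update: y = base(x) + dropout(x) @ A.T @ B.T.

        A has shape (r, in_features) and B has shape (out_features, r),
        so B @ A is a rank-r additive update to the base weight matrix.
        """
        update = nn.functional.dropout(x, p=dropout_p) @ A.t() @ B.t()
        return base(x) + update

    base = nn.Linear(768, 768)
    A = 0.01 * torch.randn(8, 768)   # rank-8 down-projection
    B = torch.zeros(768, 8)          # up-projection, initialised to zero
    x = torch.randn(2, 197, 768)
    print(lora_linear_sketch(x, base, A, B).shape)  # torch.Size([2, 197, 768])
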
class backbone.utils.lora_utils.LoRAMlp(in_features, hidden_features=None, out_features=None, act_layer=<class 'torch.nn.modules.activation.GELU'>, norm_layer=None, bias=True, drop=0.0, use_conv=False)[source]#

Bases: Module

MLP as used in Vision Transformer, MLP-Mixer and related networks. Adapted to support LoRA-style parameters.

forward(x, AB=None, **kwargs)[source]#

Forward pass of the MLP layer. Supports AB for LoRA-style parameters (see the docs for VisionTransformer.forward). A usage sketch follows the parameter list below.

Parameters:
  • x (Tensor) – Input tensor

  • AB (dict | None) – Dictionary containing LoRA-style parameters for the layer
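
A usage sketch, assuming the layer returns a tensor with the same shape as its input; as with LoRAAttention, the AB dictionary layout is the one described in VisionTransformer.forward:

    import torch
    from backbone.utils.lora_utils import LoRAMlp

    x = torch.randn(2, 197, 768)
    mlp = LoRAMlp(in_features=768, hidden_features=3072)

    # With AB=None this is the usual ViT MLP: fc1 -> activation -> dropout -> fc2.
    out = mlp(x)
    print(out.shape)  # expected: torch.Size([2, 197, 768])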

Functions#

backbone.utils.lora_utils.to_2tuple(x)#
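
A small usage note, assuming the usual timm-style behaviour of this helper (a scalar is repeated, an iterable of length 2 is returned as a tuple):

    from backbone.utils.lora_utils import to_2tuple

    print(to_2tuple(3))       # (3, 3)
    print(to_2tuple((4, 7)))  # (4, 7)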