Reference for ultralytics/nn/modules/activation.py
Note
This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/nn/modules/activation.py. If you spot a problem please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!
ultralytics.nn.modules.activation.AGLU
AGLU(device=None, dtype=None)
Bases: Module
Unified activation function module from AGLU.
This class implements a parameterized activation function with learnable parameters lambda and kappa, based on the AGLU (Adaptive Gated Linear Unit) approach (https://github.com/kostas1515/AGLU).
Attributes:
Name | Type | Description
---|---|---
`act` | `Softplus` | Softplus activation function with negative beta.
`lambd` | `Parameter` | Learnable lambda parameter initialized with uniform distribution.
`kappa` | `Parameter` | Learnable kappa parameter initialized with uniform distribution.
Methods:
Name | Description
---|---
`forward` | Compute the forward pass of the Unified activation function.
Examples:
>>> import torch
>>> m = AGLU()
>>> input = torch.randn(2)
>>> output = m(input)
>>> print(output.shape)
torch.Size([2])
Source code in ultralytics/nn/modules/activation.py
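Because the collapsed source listing is not reproduced on this page, the following is a minimal sketch of a module with the documented interface: a `Softplus` with negative beta plus learnable `lambd` and `kappa` parameters initialized from a uniform distribution. The class name `AGLUSketch`, the scalar parameter shape, and the exact `beta=-1.0` value are illustrative assumptions; check the linked `activation.py` for the actual implementation.

```python
import torch
import torch.nn as nn


class AGLUSketch(nn.Module):
    """Illustrative AGLU-style module (a sketch, not the exact Ultralytics source)."""

    def __init__(self, device=None, dtype=None) -> None:
        super().__init__()
        # Softplus activation with negative beta, as listed in the Attributes table above.
        self.act = nn.Softplus(beta=-1.0)
        # Learnable lambda and kappa parameters with uniform initialization (scalar shape assumed).
        self.lambd = nn.Parameter(nn.init.uniform_(torch.empty(1, device=device, dtype=dtype)))
        self.kappa = nn.Parameter(nn.init.uniform_(torch.empty(1, device=device, dtype=dtype)))
```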
forward
forward(x: torch.Tensor) -> torch.Tensor
Apply the Adaptive Gated Linear Unit (AGLU) activation function.
This forward method implements the AGLU activation function with learnable parameters lambda and kappa. The function applies a transformation that adaptively combines linear and non-linear components.
Parameters:
Name | Type | Description | Default
---|---|---|---
`x` | `Tensor` | Input tensor to apply the activation function to. | required
Returns:
Type | Description
---|---
`Tensor` | Output tensor after applying the AGLU activation function, with the same shape as the input.
Source code in ultralytics/nn/modules/activation.py
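To make the "adaptively combines linear and non-linear components" description concrete, here is a hedged sketch of such a forward pass following the formulation in the linked AGLU repository: an exponential of the negative-beta Softplus scaled by `1 / lambda`, with lambda clamped away from zero. The function name, the clamp value `0.0001`, and passing the parameters explicitly are assumptions made for illustration.

```python
import torch
import torch.nn as nn


def aglu_forward_sketch(
    x: torch.Tensor, lambd: torch.Tensor, kappa: torch.Tensor, act: nn.Softplus
) -> torch.Tensor:
    """Illustrative AGLU-style forward: exp((1 / lambda) * softplus_{beta<0}(kappa * x - log(lambda)))."""
    lam = torch.clamp(lambd, min=0.0001)  # keep lambda strictly positive (clamp value assumed)
    return torch.exp((1 / lam) * act((kappa * x) - torch.log(lam)))
```

With scalar `lambd` and `kappa`, this elementwise transformation preserves the input shape, consistent with the `torch.Size([2])` output shown in the Examples block above.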