Losses#

MoCo#

eztorch.losses.compute_moco_loss(q, k, k_global, use_keys, queue, temp=0.2, rank=0)[source]#

Compute the MoCo loss.

Parameters:
  • q (Tensor) – The representations of the queries.

  • k (Tensor) – The representations of the keys.

  • k_global (Tensor) – The global representations of the keys.

  • use_keys (bool) – Whether to use the non-positive elements from the key.

  • queue (Tensor) – The queue of representations.

  • temp (float, optional) – Temperature applied to the query similarities.
    Default: 0.2

  • rank (int, optional) – Rank of the device for positive labels.
    Default: 0

Return type:

Tensor

Returns:

The loss.
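
A minimal single-process usage sketch; the feature dimensions, the queue layout ((queue_size, dim) here), and the use of L2-normalized features are illustrative assumptions, not guarantees from the implementation:

    import torch
    import torch.nn.functional as F

    from eztorch.losses import compute_moco_loss

    batch_size, dim, queue_size = 32, 128, 1024

    # L2-normalized queries and keys (shapes are illustrative assumptions).
    q = F.normalize(torch.randn(batch_size, dim, requires_grad=True), dim=1)
    k = F.normalize(torch.randn(batch_size, dim), dim=1)
    k_global = k  # single process: the gathered keys equal the local keys

    # Hypothetical queue of past key representations; its layout may differ.
    queue = F.normalize(torch.randn(queue_size, dim), dim=1)

    loss = compute_moco_loss(q, k, k_global, use_keys=True, queue=queue,
                             temp=0.2, rank=0)
    loss.backward()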

eztorch.losses.compute_mocov3_loss(q, k, temp=1.0, rank=0)[source]#

Compute the MoCov3 loss.

Parameters:
  • q (Tensor) – The representations of the queries.

  • k (Tensor) – The global representations of the keys.

  • temp (float, optional) – Temperature for softmax.
    Default: 1.0

  • rank (int, optional) – Rank of the device for positive labels.
    Default: 0

Return type:

Tensor

Returns:

The loss.
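
A minimal sketch under the same assumptions as above (single process, L2-normalized (batch, dim) features):

    import torch
    import torch.nn.functional as F

    from eztorch.losses import compute_mocov3_loss

    # Queries from the online branch, keys from the momentum encoder
    # (shapes and normalization are assumptions).
    q = F.normalize(torch.randn(32, 256, requires_grad=True), dim=1)
    k = F.normalize(torch.randn(32, 256), dim=1)  # gathered == local on one process

    loss = compute_mocov3_loss(q, k, temp=1.0, rank=0)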

ReSSL#

eztorch.losses.compute_ressl_loss(q, k, k_global, use_keys, queue, mask, temp=0.1, temp_m=0.04, LARGE_NUM=1000000000.0)[source]#

Compute the ReSSL loss.

Parameters:
  • q (Tensor) – The representations of the queries.

  • k (Tensor) – The representations of the keys.

  • k_global (Tensor) – The global representations of the keys.

  • use_keys (bool) – Whether to use the non-positive elements from the key.

  • queue (Tensor) – The queue of representations.

  • mask (Tensor) – Mask of positives for the query.

  • temp (float, optional) – Temperature applied to the query similarities.
    Default: 0.1

  • temp_m (float, optional) – Temperature applied to the key similarities.
    Default: 0.04

  • LARGE_NUM (float, optional) – Large number to mask elements.
    Default: 1000000000.0

Return type:

Tensor

Returns:

The loss.

eztorch.losses.compute_ressl_mask(batch_size, num_negatives, use_keys=True, rank=0, world_size=1, device='cpu')[source]#

Precompute the mask for ReSSL.

Parameters:
  • batch_size (int) – The local batch size.

  • num_negatives (int) – The number of negatives besides the non-positive key elements.

  • use_keys (bool, optional) – Whether to use the non-positive elements from the key as negatives.
    Default: True

  • rank (int, optional) – Rank of the current process.
    Default: 0

  • world_size (int, optional) – Number of processes that perform training.
    Default: 1

  • device (str, optional) – Device that performs training.
    Default: 'cpu'

Return type:

Tensor

Returns:

The mask.
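
A sketch combining the two ReSSL helpers on a single process; the queue layout and feature shapes are assumptions, and num_negatives is set to the queue size:

    import torch
    import torch.nn.functional as F

    from eztorch.losses import compute_ressl_loss, compute_ressl_mask

    batch_size, dim, queue_size = 32, 128, 1024

    q = F.normalize(torch.randn(batch_size, dim, requires_grad=True), dim=1)
    k = F.normalize(torch.randn(batch_size, dim), dim=1)
    k_global = k  # single process: gathered keys == local keys
    queue = F.normalize(torch.randn(queue_size, dim), dim=1)  # layout is an assumption

    # The mask only depends on the batch geometry, so it can be
    # precomputed once at setup time and reused every iteration.
    mask = compute_ressl_mask(batch_size, num_negatives=queue_size,
                              use_keys=True, rank=0, world_size=1, device='cpu')

    loss = compute_ressl_loss(q, k, k_global, use_keys=True, queue=queue,
                              mask=mask, temp=0.1, temp_m=0.04)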

SCE#

eztorch.losses.compute_sce_loss(q, k, k_global, use_keys, queue, mask, coeff, temp=0.1, temp_m=0.07, LARGE_NUM=1000000000.0)[source]#

Compute the SCE loss.

Parameters:
  • q (Tensor) – The representations of the queries.

  • k (Tensor) – The representations of the keys.

  • k_global (Tensor) – The global representations of the keys.

  • use_keys (bool) – Whether to use the non-positive elements from the key.

  • queue (Tensor) – The queue of representations.

  • mask (Tensor) – Mask of positives for the query.

  • coeff (float) – Coefficient between the contrastive and relational aspects.

  • temp (float, optional) – Temperature applied to the query similarities.
    Default: 0.1

  • temp_m (float, optional) – Temperature applied to the key similarities.
    Default: 0.07

  • LARGE_NUM (float, optional) – Large number to mask elements.
    Default: 1000000000.0

Return type:

Tensor

Returns:

The loss.

eztorch.losses.compute_sce_mask(batch_size, num_negatives, use_keys=True, rank=0, world_size=1, device='cpu')[source]#

Precompute the mask for SCE.

Parameters:
  • batch_size (int) – The local batch size.

  • num_negatives (int) – The number of negatives besides the non-positive key elements.

  • use_keys (bool, optional) – Whether to use the non-positive elements from the key as negatives.
    Default: True

  • rank (int, optional) – Rank of the current process.
    Default: 0

  • world_size (int, optional) – Number of processes that perform training.
    Default: 1

  • device (str, optional) – Device that performs training.
    Default: 'cpu'

Return type:

Tensor

Returns:

The mask.
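
A sketch wiring compute_sce_mask into compute_sce_loss on a single process; shapes, the queue layout, and the coeff value are illustrative assumptions:

    import torch
    import torch.nn.functional as F

    from eztorch.losses import compute_sce_loss, compute_sce_mask

    batch_size, dim, queue_size = 32, 128, 1024

    q = F.normalize(torch.randn(batch_size, dim, requires_grad=True), dim=1)
    k = F.normalize(torch.randn(batch_size, dim), dim=1)
    k_global = k  # single process: gathered keys == local keys
    queue = F.normalize(torch.randn(queue_size, dim), dim=1)  # layout is an assumption

    # Precompute the positives mask once for the fixed batch geometry.
    mask = compute_sce_mask(batch_size, num_negatives=queue_size,
                            use_keys=True, rank=0, world_size=1, device='cpu')

    # coeff balances the contrastive and relational terms; 0.5 is arbitrary here.
    loss = compute_sce_loss(q, k, k_global, use_keys=True, queue=queue,
                            mask=mask, coeff=0.5, temp=0.1, temp_m=0.07)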

SCE tokens#

eztorch.losses.compute_sce_token_loss(q, k, k_global, queue, mask_sim_q, mask_sim_k, mask_prob_q, mask_log_q, coeff, temp=0.1, temp_m=0.07, LARGE_NUM=1000000000.0)[source]#

Compute the SCE loss for several tokens as output.

Parameters:
  • q (Tensor) – The representations of the queries.

  • k (Tensor) – The representations of the keys.

  • k_global (Tensor) – The global representations of the keys.

  • queue (Tensor) – The queue of representations.

  • mask_sim_q (Tensor) – Mask to ignore similarities for the query.

  • mask_sim_k (Tensor) – Mask to ignore similarities for the keys.

  • mask_prob_q (Tensor) – Mask of positives for the query.

  • mask_log_q (Tensor) – Mask of elements to keep after applying log to the query distribution.

  • coeff (float) – Coefficient between the contrastive and relational aspects.

  • temp (float, optional) – Temperature applied to the query similarities.
    Default: 0.1

  • temp_m (float, optional) – Temperature applied to the key similarities.
    Default: 0.07

  • LARGE_NUM (float, optional) – Large number to mask elements.
    Default: 1000000000.0

Return type:

Tensor

Returns:

The loss.

eztorch.losses.compute_sce_token_masks(batch_size, num_tokens, num_negatives, positive_radius=0, keep_aligned_positive=True, use_keys=True, use_all_keys=False, rank=0, world_size=1, device='cpu')[source]#

Precompute the masks for SCE with tokens.

Parameters:
  • batch_size (int) – The local batch size.

  • num_tokens (int) – The number of tokens per instance.

  • num_negatives (int) – The number of negatives besides the non-positive key elements.

  • positive_radius (int, optional) – The radius of adjacent tokens to consider as positives.
    Default: 0

  • keep_aligned_positive (bool, optional) – Whether to keep the aligned token as positive.
    Default: True

  • use_keys (bool, optional) – Whether to use the non-positive elements from the aligned key as negatives.
    Default: True

  • use_all_keys (bool, optional) – Whether to use the non-positive elements from all the gathered keys as negatives.
    Default: False

  • rank (int, optional) – Rank of the current process.
    Default: 0

  • world_size (int, optional) – Number of processes that perform training.
    Default: 1

  • device (str, optional) – Device that performs training.
    Default: 'cpu'

Return type:

Tuple[Tensor, Tensor, Tensor, Tensor, Tensor]

Returns:

  • Mask to ignore similarities for the query.

  • Mask to ignore similarities for the keys.

  • Mask of positives for the query.

  • Mask to keep log values.

  • The number of positives for the specific token.
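
A sketch wiring compute_sce_token_masks into compute_sce_token_loss; the (batch, tokens, dim) feature layout, the queue layout, and the unpacking order of the returned masks are assumptions based on the parameter documentation above:

    import torch
    import torch.nn.functional as F

    from eztorch.losses import compute_sce_token_loss, compute_sce_token_masks

    batch_size, num_tokens, dim, num_negatives = 8, 4, 128, 512

    # Token-level representations; the (B, T, D) layout is an assumption.
    q = F.normalize(torch.randn(batch_size, num_tokens, dim, requires_grad=True), dim=-1)
    k = F.normalize(torch.randn(batch_size, num_tokens, dim), dim=-1)
    k_global = k  # single process: gathered keys == local keys
    queue = F.normalize(torch.randn(num_negatives, dim), dim=-1)  # layout is an assumption

    # Assumed unpacking order, following the Returns list above.
    mask_sim_q, mask_sim_k, mask_prob_q, mask_log_q, num_positives = \
        compute_sce_token_masks(batch_size, num_tokens, num_negatives,
                                positive_radius=1, keep_aligned_positive=True,
                                use_keys=True, use_all_keys=False,
                                rank=0, world_size=1, device='cpu')

    loss = compute_sce_token_loss(q, k, k_global, queue, mask_sim_q, mask_sim_k,
                                  mask_prob_q, mask_log_q, coeff=0.5,
                                  temp=0.1, temp_m=0.07)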

SimCLR#

eztorch.losses.compute_simclr_loss(z, z_global, pos_mask, neg_mask, temp)[source]#

Compute the SimCLR loss.

Parameters:
  • z (Tensor) – The local representations.

  • z_global (Tensor) – The gathered representations.

  • pos_mask – Positives mask.

  • neg_mask – Negatives mask.

  • temp (float) – Temperature for softmax.

Return type:

Tensor

Returns:

The loss.

eztorch.losses.compute_simclr_masks(batch_size, num_crops=2, rank=0, world_size=1, device='cpu')[source]#

Compute positive and negative masks for SimCLR.

Parameters:
  • batch_size (int) – The local batch size per iteration.

  • num_crops (int, optional) – The number of crops per instance.
    Default: 2

  • rank (int, optional) – Rank of the current process.
    Default: 0

  • world_size (int, optional) – Number of processes that perform training.
    Default: 1

  • device (str, optional) – Device that performs training.
    Default: 'cpu'

Return type:

Tuple[Tensor, Tensor]

Returns:

The positive and negative masks.
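
A single-process sketch; the (num_crops * batch, dim) layout of the concatenated crop representations is an assumption:

    import torch
    import torch.nn.functional as F

    from eztorch.losses import compute_simclr_loss, compute_simclr_masks

    batch_size, num_crops, dim = 32, 2, 128

    # Representations of all crops, concatenated along the batch dimension
    # (layout is an assumption).
    z = F.normalize(torch.randn(num_crops * batch_size, dim, requires_grad=True), dim=1)
    z_global = z  # single process: gathered == local

    pos_mask, neg_mask = compute_simclr_masks(batch_size, num_crops=num_crops,
                                              rank=0, world_size=1, device='cpu')

    loss = compute_simclr_loss(z, z_global, pos_mask, neg_mask, temp=0.1)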

Spot loss#

eztorch.losses.compute_spot_loss_fn(class_preds, class_target, has_label, ignore_class, class_weights=None, mixup_weights=None, class_loss_type=LossType.BCE, alpha=0.25, gamma=2)[source]#

Compute the SoccerNet loss function, which is a classification loss.

Parameters:
  • class_preds (Tensor) – Predictions for the classes. Expected shape: (B, T, C).

  • class_target (Tensor) – Multi-label encoded classes. Can be continuous labels in case of mixup. Expected shape: (B, T, C).

  • has_label (Tensor) – Whether there is a label. Expected shape: (B', T, C).

  • ignore_class (Tensor) – Whether to ignore the class. Expected shape: (B', T, C).

  • class_weights (Tensor, optional) – Weights of negatives and positives for the BCE loss. Expected shape: (2, C).
    Default: None

  • mixup_weights (Tensor, optional) – Mixup weights for the loss. Expected shape: (B', T, C).
    Default: None

  • class_loss_type (LossType, optional) – Type of loss to use. Can be BCE, softmax, or focal.
    Default: LossType.BCE

  • alpha (float, optional) – Alpha parameter of the focal loss.
    Default: 0.25

  • gamma (float, optional) – Gamma parameter of the focal loss.
    Default: 2

Return type:

Tensor

Returns:

The reduced sum of the classification losses.
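
A sketch with random data and the default BCE loss type; the bool dtypes for has_label and ignore_class, the assumption that B' == B, and whether class_preds are expected as logits or probabilities are all unverified assumptions:

    import torch

    from eztorch.losses import compute_spot_loss_fn

    B, T, C = 4, 16, 17  # batch, timesteps, classes (illustrative sizes)

    class_preds = torch.rand(B, T, C, requires_grad=True)  # per-class scores
    class_target = torch.randint(0, 2, (B, T, C)).float()  # multi-label targets
    has_label = torch.ones(B, T, C, dtype=torch.bool)      # every step labeled
    ignore_class = torch.zeros(B, T, C, dtype=torch.bool)  # nothing ignored

    loss = compute_spot_loss_fn(class_preds, class_target, has_label, ignore_class)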