util.misc

Misc functions, including distributed helpers.

Mostly copy-pasted from the torchvision references.

Classes

SmoothedValue

Track a series of values and provide access to smoothed values over a window or the global series average.

Functions

all_gather(data)

Run all_gather on arbitrary picklable data (not necessarily tensors).

reduce_dict(input_dict[, average])

Reduce the values in the dictionary from all processes so that all processes have the averaged results.

setup_for_distributed(is_master)

This function disables printing when not in the master process.

accuracy(output, target[, topk])

Computes the precision@k for the specified values of k.

interpolate(→ torch.Tensor)

Equivalent to nn.functional.interpolate, but with support for empty batch sizes.

Module Contents

class util.misc.SmoothedValue(window_size=20, fmt=None)[source]

Track a series of values and provide access to smoothed values over a window or the global series average.

synchronize_between_processes()[source]

Warning: does not synchronize the deque!
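A minimal usage sketch; the update() method and the median/global_avg accessors are assumed from the reference implementation and are not listed in this summary:

>>> from util.misc import SmoothedValue
>>> tracker = SmoothedValue(window_size=20)
>>> for loss in (0.9, 0.7, 0.6):
...     tracker.update(loss)           # push one value per step
>>> round(tracker.median, 4)           # median over the sliding window
0.7
>>> round(tracker.global_avg, 4)       # mean over every value ever pushed
0.7333

In a multi-process run, call synchronize_between_processes() before reading global statistics; per the warning above, the windowed deque itself stays local to each process.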

util.misc.all_gather(data)[source]

Run all_gather on arbitrary picklable data (not necessarily tensors).

Parameters:
  • data – any picklable object

Returns:

list of data gathered from each rank

Return type:

list[data]
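A sketch of typical use, assuming torch.distributed has already been initialized (e.g. via init_process_group):

>>> import torch.distributed as dist
>>> stats = {'rank': dist.get_rank(), 'n_samples': 128}   # any picklable object
>>> gathered = util.misc.all_gather(stats)                # identical list on every rank
>>> len(gathered) == dist.get_world_size()
True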

util.misc.reduce_dict(input_dict, average=True)[source]

Reduce the values in the dictionary from all processes so that all processes have the averaged results. Returns a dict with the same fields as input_dict, after reduction.

Parameters:
  • input_dict (dict) – all the values will be reduced

  • average (bool) – whether to average or sum
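A sketch of the common logging pattern, assuming the dict values are scalar tensors (the underlying all_reduce operates on tensors):

>>> import torch
>>> loss_dict = {'loss_ce': torch.tensor(0.9), 'loss_bbox': torch.tensor(0.3)}
>>> reduced = util.misc.reduce_dict(loss_dict)                 # averaged across ranks
>>> summed = util.misc.reduce_dict(loss_dict, average=False)   # summed instead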

util.misc.setup_for_distributed(is_master)[source]

This function disables printing when not in the master process.
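Typically called once at startup, after the process group is initialized; a sketch:

>>> import torch.distributed as dist
>>> is_master = dist.get_rank() == 0
>>> util.misc.setup_for_distributed(is_master)
>>> print('hello')   # now a no-op on every rank except the master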

util.misc.accuracy(output, target, topk=(1,))[source]

Computes the precision@k for the specified values of k.
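A small sketch; the return format (a list of percentages, one per k) is assumed from the torchvision reference this module copies:

>>> import torch
>>> output = torch.tensor([[0.1, 0.8, 0.1],    # per-class scores, shape (batch, num_classes)
...                        [0.7, 0.2, 0.1]])
>>> target = torch.tensor([1, 2])              # ground-truth class indices
>>> top1, top3 = util.misc.accuracy(output, target, topk=(1, 3))
>>> # sample 0 is correct at k=1; sample 1 only enters at k=3,
>>> # so top1 is 50.0 and top3 is 100.0 (as percentages)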

util.misc.interpolate(input: torch.Tensor, size: List[int] | None = None, scale_factor: float | None = None, mode: str = 'nearest', align_corners: bool | None = None) → torch.Tensor[source]

Equivalent to nn.functional.interpolate, but with support for empty batch sizes.

This will eventually be supported natively by PyTorch, and this function can go away.
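The point of the wrapper is the empty-batch case; a sketch:

>>> import torch
>>> empty = torch.zeros(0, 3, 32, 32)                     # batch size 0
>>> out = util.misc.interpolate(empty, scale_factor=2.0)
>>> out.shape                                             # spatial dims still scale
torch.Size([0, 3, 64, 64])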