
implementation of optimizer #17

@QingyvHan

Sorry, I searched Google but I'm still confused about the GlobalAdam optimizer. Can someone help me understand what this code is doing?

        for group in self.param_groups:
            for p in group['params']:
                state = self.state[p]
                # Pre-initialize Adam's per-parameter state up front,
                # instead of leaving it to the first call to step().
                state['step'] = 0
                state['exp_avg'] = torch.zeros_like(p.data)     # first moment (m)
                state['exp_avg_sq'] = torch.zeros_like(p.data)  # second moment (v)

                # Move both moment buffers into shared memory, so worker
                # processes forked later all read and write the same tensors.
                state['exp_avg'].share_memory_()
                state['exp_avg_sq'].share_memory_()
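
For context, a loop like this normally sits in the constructor of an Adam subclass used for Hogwild-style training (e.g. A3C), where several worker processes update one global model. Adam keeps running first- and second-moment estimates per parameter, and share_memory_() moves those buffers into shared memory so every process accumulates into the same statistics. Below is a minimal, self-contained sketch of such a class; the name GlobalAdam is taken from the question, but everything outside the quoted loop (constructor signature, worker function, usage code) is an illustrative assumption, not this repo's exact implementation. It also assumes an older PyTorch release in which Adam stores 'step' as a plain Python int; that counter gets copied per process at fork rather than shared, a known quirk of this pattern.

    import torch
    import torch.multiprocessing as mp

    class GlobalAdam(torch.optim.Adam):
        # Adam whose moment buffers live in shared memory, so forked
        # worker processes all update one global set of statistics.
        # Sketch only: the constructor arguments here are assumptions.
        def __init__(self, params, lr=1e-3, betas=(0.9, 0.999)):
            super(GlobalAdam, self).__init__(params, lr=lr, betas=betas)
            for group in self.param_groups:
                for p in group['params']:
                    state = self.state[p]
                    state['step'] = 0
                    state['exp_avg'] = torch.zeros_like(p.data)
                    state['exp_avg_sq'] = torch.zeros_like(p.data)
                    # Shared across processes; 'step' (a plain int) is not.
                    state['exp_avg'].share_memory_()
                    state['exp_avg_sq'].share_memory_()

    def worker(global_model, optimizer):
        # Each worker computes gradients on the shared model and steps
        # the shared optimizer (hypothetical one-step training loop).
        loss = global_model(torch.randn(1, 4)).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    if __name__ == '__main__':
        global_model = torch.nn.Linear(4, 2)
        global_model.share_memory()  # model weights are shared as well
        optimizer = GlobalAdam(global_model.parameters(), lr=1e-3)
        procs = [mp.Process(target=worker, args=(global_model, optimizer))
                 for _ in range(2)]
        for pr in procs:
            pr.start()
        for pr in procs:
            pr.join()

A related variant (e.g. SharedAdam in ikostrikov/pytorch-a3c) stores the step counter as a one-element tensor and shares it too, overriding step() accordingly, so the bias-correction count stays consistent across processes.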
