
Conversation

@AlexTMallen
Collaborator

No description provided.

Collaborator

@lauritowal left a comment

Ran it with elk elicit gpt2 imdb --no_balance True --disable_cache --max_examples 100 100 --num_gpus 1 --max_inlp_iter 4 and it seems to work.

Added some comments, though.

binarize: bool = False
"""Whether to binarize the dataset labels for multi-class datasets."""

no_balance: bool = False
Collaborator

Why not just make it balance: bool = True?

Collaborator

That would also avoid having this:
balance=not cfg.no_balance

Collaborator Author

Because it would be unclear how to use the flag to disable balancing from the CLI. --balance False or something is weirder than --no_balance.
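For illustration, here is a minimal argparse sketch of the negative-flag style under discussion (the project's actual CLI parser may well differ; only the flag name comes from the thread):

```python
import argparse

# A negative store_true flag reads naturally on the command line,
# whereas a positive default-True option needs an explicit value
# such as `--balance False`.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--no_balance",
    action="store_true",
    help="Disable class balancing (balanced by default)",
)
cfg = parser.parse_args(["--no_balance"])

# Downstream code then inverts the flag, as in the diff:
balance = not cfg.no_balance
print(balance)  # False: balancing disabled
```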

Collaborator

--balance False does not seem weirder than --no_balance True to me.
But okay, it's fine by me.

Collaborator Author

Yeah, I think I agree with you now.


if max_iter is not None:
    d = min(d, max_iter)
max_iter = max_iter or d
Collaborator

That's just some refactoring which has nothing to do with the balancing, I guess?

Collaborator Author

Right, I also added a max_iter flag, and this was a necessary refactoring.


def train_supervised(
    data: dict[str, tuple], device: str, mode: str
    data: dict[str, tuple], device: str, mode: str, max_inlp_iter: int | None = None
Collaborator

That's a new feature, not related to the balancing either, right?
