Unskip tests that were previously failing #846
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##             main     #846      +/-   ##
==========================================
- Coverage   91.54%   91.13%   -0.42%
==========================================
  Files          17       17
  Lines        1621     1635      +14
==========================================
+ Hits         1484     1490       +6
- Misses        137      145       +8

See 5 files with indirect coverage changes. Continue to review the full report in Codecov by Sentry.
Pull Request Overview
This PR unskips two previously broken tests in the test suite that now pass with Python 3.13, addressing issue #831.
- Removes @unittest.skip decorators from two test methods (see the sketch below)
- Tests are now enabled and can run as part of the test suite
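As a rough illustration of what the change looks like (the class and test names here are hypothetical, not the actual papermill tests), unskipping a test amounts to deleting the decorator so unittest collects and runs the method again:

```python
import unittest

class TestNotebookExecution(unittest.TestCase):
    # Before this PR the method carried a skip decorator, so the runner
    # reported it as skipped instead of executing it:
    #
    #     @unittest.skip("fails on Python 3.13")
    #     def test_execute_notebook(self):
    #         ...
    #
    # After removing the decorator, the test is collected and run normally.
    def test_execute_notebook(self):
        self.assertEqual(1 + 1, 2)  # placeholder assertion for this sketch

if __name__ == "__main__":
    unittest.main()
```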
@Borda I was following up on earlier work from #828 and #836, which cleaned up build errors and Python versions. I opened #831 as a reminder to go back and check tests that were previously failing for 3.13 and 3.14. #838 was still a draft WIP PR from Copilot. I was also working on test results for #841 to support 3.14 (#841 (comment)).

As for whether or not I like bots, I view them as a tool but not a replacement for human PRs and reviews. I think the tool's results can be good in some cases, but in other scenarios it recommends unnecessary churn and takes a less conservative solution. That said, I am OK with us using Copilot if there is a human in the loop. Here's what we have been doing on Python: https://devguide.python.org/getting-started/generative-ai/

Overall, I'm happy that we have rebooted moving papermill forward. 🚀
I agree that bots should not be released into the wild but should work alongside a human or under supervision...
That was the case here, as I assigned it the task, prompted it, and was assigned to the PR as well... But if the way forward is human-only PRs, that is fine with me and I will proceed that way 🐼

Sorry to read that; I didn't know that Python is conservative and does not favor developer productivity. 😕
We are not against it. We are being conservative since we have seen a bunch of AI slop in the past two years. Personally, a few of us are working on adding md files for better AI context to the repo.
Feel free to keep using Copilot, @Borda. I think there are benefits to it.
Sounds like refusing to use a pressure pot because in its early days they exploded a few times, while ignoring the progress made since then... 🦩
@Borda that is not an argument but only a weird analogy. I like to use my pressure pot for potatoes, but even though it has never exploded, I refuse to put my pizza and chocolate in it. Also, using more pressure pots doesn't directly cause fewer explosions.
@winniehell cool, seems you understood, and thank you for rephrasing what I just said :) |
Fixes #831
Tested locally with Python 3.13, which was failing earlier in the day.
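For reference, one way to re-run just the re-enabled tests locally would be something like the sketch below. This assumes pytest is used as the test runner, and the `-k` keyword expression is purely illustrative, since the actual test names are not shown in this thread:

```python
# Hypothetical local check: invoke pytest via its Python API under Python 3.13.
# Replace the "-k" placeholder with the real names of the unskipped tests.
import sys
import pytest

sys.exit(pytest.main(["-v", "-k", "previously_skipped_test_name"]))
```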