Added Mac (M-series) support #325
Conversation
🧪 CI Insights: Here's what we observed from your CI run for 1c3f3ee. 🟢 All jobs passed! But CI Insights is watching 👀
Of course, this seems to be because it is not working; I will continue working on this.
lol
This is the issue it would close, which has some more details:
Force-pushed from 5941e46 to 76c47bc

Commits:
- restore original linux version for tests
- adjust CI for apt-get
- remove redundant assignment
- add system display just for sanity
- move sanity printout to test and remove args keyword
- remove extra summary check
- feat: debugged core logic
- fix: skip a couple linux-specific tests
- fix: fix skipif
- feat: final touches

Force-pushed from 76c47bc to f4384cb
This is now passing on all GitHub Mac runners (and my device). The failing CI seems to be a particular environment setup issue: https://github.com/con/duct/actions/runs/18768853617/job/53549456968?pr=325 @asmacdo any ideas? I will mention that I skip that step in the workflow on all Mac runners, since they run into the same exact issue (assuming something about the tox setup is tuned specifically to Linux).
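For reference, the test-level counterpart of that workflow skip (the `skipif` fixes in the commit list above) typically looks like the following in pytest. This is a generic sketch; the condition and reason shown are illustrative rather than duct's actual markers.

```python
import sys

import pytest


# Illustrative only: guard a Linux-only test so it is skipped on macOS runners.
@pytest.mark.skipif(
    sys.platform != "linux",
    reason="relies on /proc, which only exists on Linux",
)
def test_linux_only_behavior() -> None:
    with open("/proc/self/stat") as f:
        assert f.read()
```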
@CodyCBakerPhD that failure is not related to this PR; it has also been failing in the daily tests since last week.
@CodyCBakerPhD they are in the captured logs. It's surprising that with a sample coming in every 0.01 seconds (aggregated every 0.1 seconds) we only got 1 aggregated sample, and it contained none of the child processes. I'll try bumping the time way up to see if it's just a race condition or something else.
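For context, the timing being discussed roughly follows this shape (a simplified sketch, not duct's actual implementation; `take_sample` stands in for however child-process stats are actually collected). If the children are slow to appear, the early samples are empty and the first aggregate can contain nothing.

```python
import time


def aggregate_samples(take_sample, sample_interval=0.01, report_interval=0.1, duration=0.5):
    """Sample every `sample_interval` and aggregate every `report_interval`.

    If the child processes start slowly, the early samples are empty and the
    first (possibly only) aggregate ends up containing no child data at all.
    """
    aggregates, current = [], []
    next_report = time.monotonic() + report_interval
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        sample = take_sample()          # e.g. parse `ps` output for child stats
        if sample:
            current.append(sample)
        if time.monotonic() >= next_report:
            aggregates.append(current)  # may be empty if children were not up yet
            current = []
            next_report += report_interval
        time.sleep(sample_interval)
    return aggregates
```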
I see some at https://github.com/con/duct/actions/runs/18912288068/job/53986067528?pr=325#step:6:404, but only for 3.14 (which I have honestly given up on for now, since it is the most problematic). I was talking about the other parts of the testing matrix.
And for the rest of the matrix,
For the tmp debugging, it only logs when the assertion will fail, which is why it didn't show up elsewhere. The time bump proves it's just a race condition. The remaining question is why it is so slow: is it due to something specific to the runner, or is duct just super slow on Mac? We should try to bring that test duration number down a bit, but at least we now know it's working!

For signal exits, we probably need to dig into that more. If the exit code isn't 0, that is a real problem, I think. The expected behavior is that duct intercepts the signal and passes it to the child process. When the child is killed, duct exits normally with 0. So -2 -> 0 seems wrong to me. Do you happen to know if signal processing is handled differently on Macs?
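A minimal sketch of the forwarding behavior described above (illustrative only, not duct's code): the wrapper intercepts SIGINT/SIGTERM, passes the signal through to the child, and then decides its own exit code once the child is reaped.

```python
import signal
import subprocess
import sys


def run(cmd):
    child = subprocess.Popen(cmd)

    def forward(signum, frame):
        # Intercept the signal and pass it through to the child
        # instead of letting it terminate the wrapper directly.
        child.send_signal(signum)

    signal.signal(signal.SIGINT, forward)
    signal.signal(signal.SIGTERM, forward)

    returncode = child.wait()  # e.g. -2 if the child died from SIGINT
    # Whether the wrapper should then exit 0 or propagate the child's code
    # is exactly the behavior under discussion here.
    return 0 if returncode < 0 else returncode


if __name__ == "__main__":
    sys.exit(run(sys.argv[1:]))
```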
It could be that the signal tests are also sensitive to race conditions. If the process startup is extremely slow, it's possible that the signal handler doesn't get registered in time, which would explain the negative error codes. I still can't explain the error code of 1 for the signal test, though.

We are going to pause on this for the moment until the combine-cli PR is merged. After rebase, @CodyCBakerPhD and I agreed to drop the macos-intel part of the matrix to get this in, though duct is probably working on those systems. After adding config files, we can add a separate config for the Mac Intel envs which would run slower tests, and hopefully avoid the race conditions.
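One common way to sidestep that registration race in tests (an illustrative pattern only, not duct's actual test helpers) is to wait for the wrapped process to announce readiness before sending the signal, rather than sleeping for a fixed time:

```python
import signal
import subprocess


def send_sigint_once_ready(cmd, marker=b"ready"):
    """Start `cmd`, wait until it prints `marker`, then send SIGINT.

    Waiting for an explicit readiness marker avoids delivering the signal
    before the handler has been registered on a slow runner.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    for line in proc.stdout:
        if marker in line:
            break
    proc.send_signal(signal.SIGINT)
    return proc.wait()
```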
the output of that
I added a minimal retry mechanism for empty samples that can no doubt be improved in the future.

Before I disabled the Intel-based tests, they seemed to be failing a bit more reliably on the 'sanity green' check as well as the SIGINT returns (either

NOTE: I finally saw it occur on Ubuntu as well (https://github.com/con/duct/actions/runs/19088640558/job/54534324996?pr=325#step:6:441); perhaps it is simply a random occurrence in the test suite on all platforms, with Ubuntu having much lower rates of occurrence than

I propose we deal with that in the follow-up (#335), either acknowledging it to be acceptable behavior on that architecture and adjusting the tests correspondingly, or someone attempting to dig further into the 'issue' than I can with my devices.

For now I am proceeding under the assumption that only M-series 'latest' Macs are guaranteed to work as intended (all
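The retry mechanism mentioned at the top of the comment above presumably follows a pattern along these lines (a rough sketch, not the exact change in this PR; `take_sample` is a stand-in):

```python
import time


def sample_with_retry(take_sample, retries=3, delay=0.05):
    """Re-take a sample a few times if it comes back empty.

    On slow runners the child processes may not be visible on the first
    attempt, so a short retry loop covers the startup race.
    """
    sample = take_sample()
    for _ in range(retries):
        if sample:
            break
        time.sleep(delay)
        sample = take_sample()
    return sample  # may still be empty if the children never appeared
```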
(Forgive the constant small pushes, BTW; it is the easiest way I have of triggering retries to confirm the randomness is 'gone', since I don't have permissions on the CI.)
lol, TBH it is probably just easier for me at this point to open a new PR; I will do that sometime this weekend.
Re-started in #351
Fixes #82 [edit asmacdo]
I tried running the test suite on Mac and encountered many failures.
This is because, for some reason, the output for small values seems to get mapped to `null` [edit: they were actually not being captured at all due to differences in the OS-dependent `ps` command]. There was also lots of terminal spam [edit: this was the source of the issue] about `ps` not being called correctly, so I fixed that as well. It did not seem to actually alter the running of `duct`, however; it just wasn't nice to look at.

Some other minor comments have been left on lines below.