Workbench - daskify tripyview - new version #204
patrickscholz wants to merge 146 commits into main from
Conversation
…or dask client initialisation, adding spinupcycles index to input paths ...
…r computation in time slice
…ed horizontal coarse graining algorithm, chunk horizontal plotting for pcolor plot,
…here are still too many arrows in quiver plot at large grids
…rove transport and heatflux computation through transects, now project transport onto the normal vector of the main transect
…hich is more universal across the package
…ure environment, whether inline, notebook or widget can be chosen by input
… spinupcycle number to the inputpath ...
…ssue when reference plot, add loading of bolus velocity when doing streaml
…lose memory leakages and minimize dask task graph
… dask optimisation
…y with the land sea mask patch
This appears to be a major upgrade. As we are looking towards making tripyview an automatic post-processing step for any fesom run done with esm_tools (also where fesom is a component model in coupled setups), would it be feasible to pick this up again?
@JanStreffing: In principle yes, this branch is supposed to be the next release version of tripyview. I'm still tinkering around here and there as my own alpha tester whenever I encounter issues while using it. This version should work a bit better for large meshes, because it is now fully dask-parallelized, whereas before it was a mixture of dask and joblib. I also quite substantially overhauled the reading and processing of the mesh object, as well as the 1d, 2d, 3d vector rotation and the elem2node interpolation with numba, which also brought quite a performance gain.
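The elem2node interpolation mentioned above can be sketched roughly like this — a minimal illustration assuming triangular elements and simple vertex averaging; the function and variable names are hypothetical, not tripyview's actual API, and numba is used only if it is installed:

```python
import numpy as np

try:
    from numba import njit  # JIT-compile the loop if numba is available
except ImportError:
    njit = lambda f: f      # graceful fallback: run as plain Python


@njit
def elem2node(data_e, elem, n_nodes):
    """Average element-centered values onto the mesh nodes.

    data_e : 1d array, one value per triangular element
    elem   : (n_elem, 3) int array of node indices per element
    """
    out = np.zeros(n_nodes)
    cnt = np.zeros(n_nodes)
    # accumulate each element value onto its three vertices
    for ie in range(elem.shape[0]):
        for k in range(3):
            n = elem[ie, k]
            out[n] += data_e[ie]
            cnt[n] += 1.0
    # normalize by the number of contributing elements
    for n in range(n_nodes):
        if cnt[n] > 0.0:
            out[n] /= cnt[n]
    return out
```

With numba's `@njit`, the explicit Python loops compile to machine code, which is where the performance gain over a pure-Python scatter/gather comes from.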
… absolute path of the fesom_logo.png
…ector routine in polar projection; we now use our own end-point method to do this vector rotation into the projection ourselves
…more features into the plot
Use the default kernel "python3" instead. This should fix issues like `Unexpected Error: No such kernel named py39_new` when running tripyrun. The error was raised because papermill by default tries to execute the notebooks using the kernel specified in the notebook metadata. This fix tells papermill to use the kernel called "python3" and to ignore the notebook metadata.

Details: Once any conda environment is activated, the kernel "python3" seems to be associated with that activated conda environment, independent of the environment's name. Only if no environment is activated is "python3" the name of the kernel associated with the default (e.g. Levante) python env. This can be verified by running `jupyter kernelspec list`. Only tested on Levante with python 3.12.
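For illustration, this is roughly what the override does: papermill normally reads the kernel name from `metadata.kernelspec` in the notebook JSON, and the fix replaces that choice with "python3" (papermill exposes this as the `-k`/`--kernel` CLI option and the `kernel_name` argument of `execute_notebook`). The helper below is hypothetical, standard-library-only, and just shows the metadata field involved:

```python
import json


def force_kernel(nb_text: str, kernel: str = "python3") -> str:
    """Rewrite the kernelspec name stored in a notebook's metadata.

    papermill executes with the kernel named in metadata.kernelspec;
    pointing that at "python3" makes the run use whatever kernel the
    active conda environment provides. (Illustrative helper only —
    the real fix passes the kernel name to papermill directly.)
    """
    nb = json.loads(nb_text)
    nb.setdefault("metadata", {}).setdefault("kernelspec", {})["name"] = kernel
    return json.dumps(nb)
```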
tripyrun/papermill: Ignore TemplateNotebook Kernel
papermill does not seem to like "=" signs in comments. It would come up with the error:
```
Unable to parse line 6 'parallel_nthread = 2 # number of threads per worker, i.e. number worker = parallel_nprc/parallel_nthread'.
```
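The failing line can be avoided by keeping "=" out of the trailing comment, e.g. like this (a sketch of a papermill "parameters" cell; the values are placeholders, not tripyview defaults):

```python
# papermill "parameters" cell: avoid "=" inside trailing comments,
# otherwise papermill's parameter parser may fail with
# "Unable to parse line ..." as quoted above.
parallel_nprc = 4      # total number of dask worker processes (placeholder)
parallel_nthread = 2   # threads per worker; number of workers is parallel_nprc/parallel_nthread
```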
Update all Notebooks to Suppress Tripyrun Parsing Error
No description provided.