[PIMO] compare to benchmark #2378
jpcbertoldo wants to merge 5 commits into open-edge-platform:main
Conversation
Signed-off-by: jpcbertoldo <[email protected]>
Check out this pull request on ReviewNB: see visual diffs & provide feedback on Jupyter Notebooks. Powered by ReviewNB
I bypassed an issue raised by a pre-commit hook, which I don't know how to properly solve.
@jpcbertoldo, bandit complains because this approach is not safe. Instead you could use a safer library such as
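For context, a minimal sketch of the kind of change being suggested, under assumptions: the flagged call is taken to be `urllib.request.urlopen` (bandit B310) and `requests` is a guess at the safer alternative; neither is confirmed by the thread, and the URL is a placeholder.

```python
# Hypothetical fix sketch: bandit's B310 flags urllib.request.urlopen because it
# accepts arbitrary URL schemes (e.g. file://); requests only speaks HTTP(S).
import requests

# Placeholder URL, not the one used in this PR.
url = "https://example.com/benchmark_results.json"

response = requests.get(url, timeout=30)  # an explicit timeout also satisfies bandit B113
response.raise_for_status()  # fail loudly on HTTP errors instead of parsing an error page
data = response.json()
```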
Signed-off-by: jpcbertoldo <[email protected]>
It solved that problem, but now I have another 😅 I could not figure it out (I followed the suggested commands from the error and from that page), but it won't work. Maybe it'll be OK in the CI?
Signed-off-by: jpcbertoldo <[email protected]>
Signed-off-by: jpcbertoldo <[email protected]>
@jpcbertoldo, you need to install the following
I tried already :/
📝 Description
The idea is to provide a utility to compare against the AUPIMO paper's benchmark results out of the box.

It fetches the JSON-formatted results from the official repo, then (to be implemented) returns a dataframe where the user's models are compared against them; a rough sketch of the intended shape is shown below.

Currently this is partially showcased in the notebook
https://github.com/openvinotoolkit/anomalib/blob/main/notebooks/700_metrics/701e_aupimo_advanced_iv.ipynb
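A minimal sketch of that intended shape, under assumptions: the function names below are placeholders rather than the final anomalib API, and the benchmark JSON layout (model name mapped to AUPIMO score) is a guess.

```python
# Hypothetical sketch of the utility described above; names and the JSON layout
# (mapping model name -> AUPIMO score) are assumptions, not the merged API.
import json
from urllib.request import urlopen  # or a safer library, per the review discussion

import pandas as pd


def fetch_benchmark_scores(url: str) -> dict[str, float]:
    """Fetch the JSON-formatted AUPIMO benchmark scores from the official repo."""
    with urlopen(url) as response:  # noqa: S310 - an https URL is assumed
        return json.loads(response.read())


def compare_to_benchmark(
    user_scores: dict[str, float], benchmark_scores: dict[str, float]
) -> pd.DataFrame:
    """Return a dataframe with the user's models side by side with the benchmark."""
    all_scores = {**benchmark_scores, **user_scores}
    df = pd.DataFrame({"aupimo": pd.Series(all_scores)})
    # Flag each row so the user can tell their models apart from the paper's.
    df["source"] = ["user" if name in user_scores else "benchmark" for name in df.index]
    return df.sort_values("aupimo", ascending=False)
```

Usage would be something like `compare_to_benchmark({"my_model": 0.72}, fetch_benchmark_scores(url))`, yielding one row per model with a column indicating whether the score comes from the user or from the paper.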
Note 1: I put the code in `utils_benchmark.py` because putting it in `utils.py` would cause a circular import.

Note 2: I left `_validate_benchmark_model` and `_validate_benchmark_dataset` inside `utils_benchmark.py` (not in `_validate.py`) on purpose because they are very specific to this module and its functions; they would not be used elsewhere.

✨ Changes
Select what type of change your PR is:
✅ Checklist
Before you submit your pull request, please make sure you have completed the following steps:
For more information about code review checklists, see the Code Review Checklist.