Conversation
```
[tool.hatch.build]
include = [
    "src/OMPyCompile/libs/*"
```
```
@@ -0,0 +1,53 @@
<!--- SPDX-License-Identifier: Apache-2.0 -->
# OMPyInfer
```
OMPyCompile? It looks like this file was copied from PyInfer.
````
This package provides a python driver to run inference on ONNX model compiled onnx-mlir.
There is a helloworld example in the tests folder with the package:
```
# IBM Confidential
````
```
@@ -0,0 +1,3 @@
# IBM Confidential
```
```python
################# use_local_compiler.py #######################################
#
# Copyright 2021-2025 The IBM Research Authors.
```
```python
class CompileSession:
    def __init__(self, model_path, **kwargs):
        self.debug = False
        # self.output_dir = tempfile.TemporaryDirectory()
```
Could you initialize all the members of this class here, so that we know what its members are (e.g. self.compiler_image_name, self.model_path, etc.)?
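A minimal sketch of what that could look like. The attribute names come from the discussion above; the defaults and the `kwargs` keys are assumptions for illustration, not the PR's actual code:

```python
import tempfile


class CompileSession:
    def __init__(self, model_path, **kwargs):
        # Declare every member up front so the full state of the class
        # is visible in one place. Defaults below are assumptions.
        self.model_path = model_path
        self.compiler_image_name = kwargs.get("compiler_image_name", None)
        self.compile_options = kwargs.get("compile_options", "")
        self.debug = kwargs.get("debug", False)
        self.compiled_model = None
        self.output_dir = tempfile.TemporaryDirectory()
```

Initializing everything in `__init__` also means later methods never have to guard against a missing attribute.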
```python
# Compiled library
if self.compile_options != "":
    command_str += " " + self.compile_options
    print(self.compile_options)
```
Should we only print this in verbose/debug mode?
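One way to gate the output, assuming the class keeps a `self.debug` flag (a sketch under that assumption, with a hypothetical `_log` helper, not the PR's actual code):

```python
class CompileSession:
    def __init__(self, model_path, debug=False):
        self.model_path = model_path
        self.debug = debug
        self.compile_options = ""

    def _log(self, message):
        # Only print when running in verbose/debug mode.
        if self.debug:
            print(message)
```

With this, `CompileSession("m.onnx", debug=True)._log("...")` prints, while the default stays silent.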
```python
    self.container_model_dirname, self.model_basename
)

print(command_str)
```
Should we only print this in verbose/debug mode?
```python
def compile(model_path, **kwargs):
    sess = CompileSession(model_path, **kwargs)
    sess.Compile()
    return sess.get_compiled_model_path()
```
What about the .constants.bin file, or perhaps a config file in the future? Do we need to return their paths as well?
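If more artifacts are expected, `compile` could return a small result object rather than a single path. A hypothetical sketch (the `CompileResult` type and `collect_outputs` helper are illustrations, not the package's API; it assumes the .constants.bin file sits next to the compiled library):

```python
import os
from dataclasses import dataclass


@dataclass
class CompileResult:
    # Hypothetical container for everything the compiler may emit.
    model_path: str              # the compiled shared library
    constants_path: str = None   # the .constants.bin file, if any
    config_path: str = None      # a future config file, if any


def collect_outputs(compiled_model):
    """Gather sibling artifacts next to the compiled library (a sketch)."""
    base, _ = os.path.splitext(compiled_model)
    constants = base + ".constants.bin"
    return CompileResult(
        model_path=compiled_model,
        constants_path=constants if os.path.exists(constants) else None,
    )
```

Returning a structured result lets new artifacts be added later without breaking callers that only want `model_path`.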
```python
# Alternative implementation: use env variable ONNX_MLIR_HOME?
# compile_args is the flags passed to onnx-mlir
r = OMPyCompile.compile(
    "./test_add.onnx",
```
If we use test_add.mlir, we don't need to push test_add.onnx to the repository.
```python
if "-o" in options_list:
    # Convert the output to absolute path so that the compilation
    # can be done with compiler image.
    self.compiled_model = os.path.abspath(
```
We may also have to handle `onnx-mlir add-large.mlir -o=bibibibibi...`. And `-o` could be the last argument, so we need to test for that possibility before adding "+1" to the index into the option list (and check that the index is in range, since an erroneous `-o` without a name is, I think, an error).
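A hedged sketch of what that parsing could look like, accepting both the two-token form (`-o name`) and the fused form (`-o=name`), and rejecting a trailing `-o` with no value (function and variable names are illustrative):

```python
import os


def find_output_path(options_list):
    """Return the absolute output path given via -o, or None.

    Handles both ("-o", "name") and ("-o=name"); raises on a
    trailing -o that has no value.
    """
    for i, opt in enumerate(options_list):
        if opt == "-o":
            # The value is the next token; make sure it exists
            # before indexing with i + 1.
            if i + 1 >= len(options_list):
                raise ValueError("-o given without an output name")
            return os.path.abspath(options_list[i + 1])
        if opt.startswith("-o="):
            return os.path.abspath(opt[len("-o="):])
    return None
```

The explicit `i + 1 >= len(options_list)` check is exactly the in-range test the comment above asks for.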
This PR refactored the previous onnxmlirdocker.py to keep only the compilation functionality. It provides a function, compile, that returns the compiled model.
Though the compilation could be provided as a single Python script file, I created a package for it, since a package is easier for users to install and use.
Examples can be found in the tests directory.
If a container is used, change the compilation arguments to: