cleanup on floorplan_to_place #3734
Conversation
Signed-off-by: Eder Monteiro <[email protected]>
```tcl
if { [info exists ::env(KEEP_VARS)] && $::env(KEEP_VARS) == 1 } {
    return
}
```
@maliberty I'm not sure this is the best way to prevent the non-stage variables from being deleted on the single run. I'm open to suggestions here.
I think this is simple and works well enough. The other option would be to capture all environment variables at the start and restore them between stages in floorplan_to_place.tcl.
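For illustration, a minimal sketch of that alternative (not the PR's actual code): snapshot `::env` once at the start, then restore it between stages. Note that `::env` is linked to the real process environment, so a full unset/restore like this is a rough approximation.

```tcl
# Sketch only: save and restore the environment between flow stages.
proc snapshot_env {} {
    # array get returns a flat key/value list of the ::env array
    return [array get ::env]
}

proc restore_env {saved} {
    array unset ::env
    array set ::env $saved
}

# Hypothetical usage in floorplan_to_place.tcl:
#   set saved [snapshot_env]
#   ... run one stage ...
#   restore_env $saved
```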
This is starting to resemble the old single flow (0e02037)
Excellent initiative and starting point. I am sure we can refine this, but it's already DRY. @MrAMS FYI: useful for making Optuna scripts that extract metrics rather than running flows.
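As a sketch of what such an Optuna script could do, the snippet below reads objectives out of a metrics JSON file instead of re-running the flow. The file path and the metric key names here are assumptions for illustration, not necessarily the actual ORFS metrics schema.

```python
import json

# Hypothetical helper: pull the values an Optuna objective needs out of a
# previously written metrics JSON file. Key names are assumptions.
def extract_objectives(metrics_path, keys=("timing__setup__ws", "power__total")):
    with open(metrics_path) as f:
        metrics = json.load(f)
    # Return a tuple of floats, one per optimization objective.
    return tuple(float(metrics[k]) for k in keys)
```

An Optuna multi-objective `objective(trial)` could then simply return this tuple after the corresponding build target finishes.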
@oharboe
The core hypothesis is that we can achieve accurate […]. To optimize this Design Space Exploration (DSE), I propose the following:
Note: The ability to skip writing interim […]
This is a fantastic proposal! I couldn't agree more with the overall approach.

One concern I have is regarding Optuna. While it natively supports pruning for single-objective optimization, it currently does not support pruning for multi-objective optimization. Since my goal is to explore the Pareto frontier, losing the ability to prune unpromising trials early is a significant limitation we need to consider. Maybe we can use […].

I have another concern regarding the resource overhead when combining Ray Tune (or other frameworks that support parallel trials) with Bazel. Since we need to isolate the output_base for each worker to prevent locking issues, this implies spawning multiple Bazel server instances (JVMs) simultaneously. I'm worried this might lead to excessive memory consumption and potential OOM (out-of-memory) issues, as each idle Bazel server already carries a significant footprint. We can use the "multi-slot Bazel concurrency" workaround I mentioned in The-OpenROAD-Project/bazel-orfs/pull/473, but it is not elegant.

We could certainly opt to disable parallel trials, but as I mentioned in my previous email, I've noticed that OpenROAD's CPU utilization is surprisingly low, even on my laptop; it doesn't seem to leverage multi-core architectures effectively. If we forgo parallel trials (task-level parallelism), we would need to figure out how to maximize OpenROAD's internal multi-threading capabilities. I suspect this will be extremely difficult, as my understanding is that many underlying EDA algorithms are inherently serial. That said, I'm not an expert on OpenROAD's internals, so I am very open to corrections or suggestions if anyone knows a way to unlock better multi-core performance within the tool itself.
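Even without multi-objective pruning, the Pareto front can be extracted from completed trials after the fact. Below is a minimal pure-Python sketch of that filtering step (the function names and the example objective tuples are made up for illustration); objectives are assumed to be minimized.

```python
# Sketch: keep only non-dominated objective vectors (minimization).
def dominates(a, b):
    """True if vector a is no worse than b in every objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Filter a list of objective tuples down to the Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (power, area) results from four flow runs:
trials = [(1.0, 4.0), (2.0, 3.0), (3.0, 2.0), (2.5, 3.5)]
print(pareto_front(trials))  # (2.5, 3.5) is dominated by (2.0, 3.0)
```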
We can trivially have N (a static number, easily on the order of the maximum core count) orfs_flow() targets in a single bazel build, each configured with a different parameter; will follow up out of band.
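A rough sketch of what that could look like in a BUILD file, using a Starlark list comprehension to stamp out one flow per parameter value. `orfs_flow` and the load path come from bazel-orfs; the specific attribute names and labels below are assumptions for illustration, not a verified invocation.

```starlark
load("@bazel-orfs//:openroad.bzl", "orfs_flow")

# Sketch only: N flows in one bazel build, one per PLACE_DENSITY value,
# so `bazel build //...:all` explores all of them without N Bazel servers.
[
    orfs_flow(
        name = "gcd_density_{}".format(d),
        top = "gcd",
        verilog_files = ["//designs:gcd.v"],  # hypothetical label
        arguments = {"PLACE_DENSITY": "0.{}".format(d)},  # hypothetical attr
    )
    for d in [60, 65, 70, 75]
]
```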