
Conversation

@m-beau commented Jan 6, 2026

Implements #4294

  • Add widget

@alejoe91 alejoe91 changed the title good times extension Good times extension Jan 6, 2026
@alejoe91 alejoe91 added the postprocessing Related to postprocessing module label Jan 6, 2026
@alejoe91 (Member) commented Jan 7, 2026

Depends on #4302

@alejoe91 (Member) commented Jan 9, 2026

@m-beau @samuelgarcia the parallelization makes it extremely fast. I slightly refactored the misc_metrics so I can call the low-level functions directly. Computing good periods for a full 1-hour NP probe now takes a minute or so :)

The cool trick is that we can compute all fp/fn values for all chunks (a chunk is unit+segment+period) and filter them all at once with thresholds. This simplifies the logic a lot! Take a look; for me this is ready for review.

I also tested the relative period mode and it works very well!

@m-beau I changed some defaults and naming to something that I think is a bit less strict, but not too much
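The "compute everything, filter at once" trick described above can be sketched roughly like this. All names, shapes, and values below are hypothetical, not the actual SpikeInterface implementation:

```python
import numpy as np

# Hypothetical sketch: gather fp/fn estimates for every chunk
# (a chunk = unit + segment + period), then threshold in one vectorized pass.
rng = np.random.default_rng(0)
n_chunks = 12
fp = rng.uniform(0.0, 0.1, n_chunks)  # estimated false-positive rate per chunk
fn = rng.uniform(0.0, 0.2, n_chunks)  # estimated false-negative rate per chunk

fp_threshold, fn_threshold = 0.05, 0.1
good_mask = (fp <= fp_threshold) & (fn <= fn_threshold)  # single filtering step
good_chunk_indices = np.flatnonzero(good_mask)
```

The point is that no per-chunk branching is needed; a boolean mask over the whole fp/fn table selects the good chunks in one pass.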

@alejoe91 alejoe91 marked this pull request as ready for review January 9, 2026 11:00
HAVE_NUMBA = False


class ComputeGoodPeriodsPerUnit(AnalyzerExtension):
Member:

I'm pretty bad at English, but my first reaction is that "good" could be replaced by "valid" for clearer semantics, no?

Member:
That was my suggestion too! I like it better as well :)


Author:

I suggested this during the hackathon too; "valid" is clearer! :)

fn_threshold=fn_threshold,
minimum_n_spikes=minimum_n_spikes,
minimum_valid_period_duration=minimum_valid_period_duration,
user_defined_periods=user_defined_periods,
Member:

@alejoe91 This could lead to a big JSON file, no?

Member:

Very good point! We still have to add tests for the external periods, but what we could do is set it as self.user_defined_periods and pop it from the dict in self.params, so it is not stored in the JSON but the run function can still use it.
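A minimal sketch of that idea, with a hypothetical params container (not the actual extension class): pop the large array off the params dict before the remaining params are serialized, and keep it on the instance.

```python
import json


class GoodPeriodsParamsSketch:
    """Hypothetical container illustrating the pop-before-serialize idea."""

    def __init__(self, **params):
        # pop the potentially large array so it never reaches the JSON file
        self.user_defined_periods = params.pop("user_defined_periods", None)
        self.params = params  # only small scalar settings remain

    def params_as_json(self):
        return json.dumps(self.params)


p = GoodPeriodsParamsSketch(fp_threshold=0.05, user_defined_periods=[(0, 30_000)])
```

The run step can still read `p.user_defined_periods`, while the serialized params stay small.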

def _merge_extension_data(
self, merge_unit_groups, new_unit_ids, new_sorting_analyzer, censor_ms=None, verbose=False, **job_kwargs
):
new_extension_data = self.data
Member:

I guess for this we should decide the rules for merging valid periods: intersection, union, or the periods from before merging?
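For illustration, the intersection rule could be sketched like this, assuming periods are sorted, non-overlapping (start, end) sample pairs (function name and representation are hypothetical):

```python
def intersect_periods(a, b):
    """Intersection of two sorted lists of non-overlapping (start, end) periods."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        start = max(a[i][0], b[j][0])
        end = min(a[i][1], b[j][1])
        if start < end:  # non-empty overlap
            out.append((start, end))
        # advance whichever period ends first
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out
```

A union rule would be the analogous sweep keeping any covered interval; the choice matters because intersection is the conservative option (a sample is valid only if it was valid for every merged unit).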

Member:

Or remove.

Member:

This is still a TODO

Member:

Done in the last commit.

Comment on lines 206 to 207
new_extension_data = self.data
return new_extension_data
Member:

Same here: what do we want for periods after a split? Maybe remove the periods of these units, no?

Member:

Same here. Probably we want to recompute?

Member:

@samuelgarcia can you take a look?

Author:

After any merge or split, I think the whole thing needs to be recomputed, as merges could lead to new refractory period violations in specific periods, and splits to new clipped amplitude distributions in other periods?

subperiod["end_sample_index"] = end
subperiod["unit_index"] = unit_index
periods_for_unit[i] = subperiod
all_subperiods = np.concatenate((all_subperiods, periods_for_unit), axis=0)
Member:

I think it is better to manage this with a list and do a single final concatenate at the end of the loop. This avoids too many memory allocations.
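The suggested pattern, as a generic sketch with toy data (not the actual periods code): append per-unit arrays to a Python list and concatenate once after the loop, instead of growing the array on every iteration.

```python
import numpy as np

# Toy sketch: one allocation at the end instead of a reallocation per loop turn.
chunks = []
for unit_index in range(3):
    periods_for_unit = np.arange(4) + 10 * unit_index  # stand-in per-unit data
    chunks.append(periods_for_unit)
all_subperiods = np.concatenate(chunks, axis=0)  # single final concatenate
```

Repeated `np.concatenate` inside a loop copies the accumulated array each time (quadratic copying overall), while the list-then-concatenate form copies each element once.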

Member:

Done.

@samuelgarcia (Member):

This looks cool.

Could we make a very simple matplotlib widget for this new extension? (in addition to sigui, of course) Having matplotlib is good for tutorials.

@alejoe91 (Member) commented Jan 9, 2026

Yes, I'll add some TODO's to the PR description so we keep track of it.

Comment on lines 349 to 350
# TODO: in case of periods, should we use total samples only in provided periods?
total_samples = sorting_analyzer.get_total_samples()
Member:

@m-beau @samuelgarcia @chrishalcrow this is a critical point. I refactored the computation so that we can pass the total samples externally. This is needed both when computing in chunks for the compute_good_periods function and when we update the metrics to use the valid periods. In that case, we will need to count the valid samples for each unit, so total_samples can now be a dictionary.
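To illustrate why a per-unit dictionary helps (hypothetical keys and numbers, not the real API): downstream rates can then be computed over each unit's own count of valid samples rather than the full recording length.

```python
# Hypothetical sketch: total_samples as a per-unit dict of valid sample counts.
sampling_frequency = 30_000.0
total_samples = {"unit0": 90_000, "unit1": 30_000}  # valid samples per unit
n_spikes = {"unit0": 300, "unit1": 50}              # spikes inside valid periods

firing_rates = {
    uid: n_spikes[uid] / (total_samples[uid] / sampling_frequency)
    for uid in total_samples
}
```

Here unit0 has 3 s of valid data and unit1 only 1 s, so the same scalar total would have biased one of the two rates.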

"amplitude_scalings",
), "Invalid amplitude_extension. It can be either 'spike_amplitudes' or 'amplitude_scalings'"
sorting = sorting_analyzer.sorting
total_duration = sorting_analyzer.get_total_duration()
Member:

Hello, do we need to be careful about stuff like this? Here the total_duration is that of the entire recording. It's used later to compute a firing rate, and this firing rate is the total number of spikes divided by the total duration, rather than spikes-in-periods divided by duration-of-periods.

Member:

In this case it is the number of spikes in the period, so I think this is correct, but I'll double-check carefully!

