Commit 6cb5cfd

Merge pull request #51 from pepkit/dev
Dev to master
2 parents 02ade48 + c6901f7 commit 6cb5cfd


57 files changed: +2841 additions, -5113 deletions

README.md

Lines changed: 36 additions & 23 deletions
@@ -11,61 +11,74 @@ Documentation is written using [mkdocs](https://www.mkdocs.org/) and themed with
 
 Each tool gets a `nav` section in `mkdocs.yml`, which maps to its own section/tab in the rendered documentation. So to add a new page, change titles, or change structure, edit `mkdocs.yml`. To edit the documentation itself, edit the `.md` documentation files in the subfolders under `/docs`.
 
-### Prereqs
+### Prerequisites
 
+```bash
+pip install mkdocs-material mkdocstrings[python] mkdocs-jupyter
 ```
-pip install mkdocs-material
+
+You'll also need to install the packages being documented (peppy, looper, pipestat, pypiper, geofetch, eido, yacman) for the API documentation to build correctly:
+
+```bash
+pip install peppy looper pipestat pypiper geofetch eido yacman
 ```
 
 
 ### Building locally for development
 
 I recommend previewing your changes locally before deploying. You can get a hot-reload server going by cloning this repository, and then just running:
 
-```
+```bash
 mkdocs serve
 ```
 
 You can also use `mkdocs build` to build a portable local version of the docs.
 
+The documentation now uses **mkdocstrings** for Python API documentation and **mkdocs-jupyter** for Jupyter notebooks. These plugins automatically generate documentation from the source code and render notebooks, so the build process is now a single step.
+
 
 ### Publishing updates
 
 The documentation is published automatically upon commits to `master` using a GitHub Action, which runs `mkdocs gh-deploy`. This builds the docs, and pushes them to the `gh-pages` branch. This branch is then published with GitHub Pages. There's no need to do this locally, just let the action deploy the updates for you automatically.
 
 ## FAQ
 
+### Python API Documentation
 
-### Updating automatic documentation
-
-In the past, I had a plugin that would auto-document 2 things: 1. Python docs using lucidoc, and 2. Jupyter notebooks. This plugin was neat, but it caused me a lot of maintenance issues as well. So now, I've made it much simpler; now it's no longer a plugin, just a simple Python script. Update all the auto-generated docs (stored in `docs/autodoc_build`) by running the update script manually:
+Python API documentation is now automatically generated using **mkdocstrings** during the build process. No separate script is needed. The API docs are defined in markdown files (e.g., `docs/peppy/code/python-api.md`) using the `:::` syntax:
 
-```console
-python autodoc.py
+```markdown
+::: peppy.Project
+    options:
+      docstring_style: google
+      show_source: true
 ```
 
-#### Configuring lucidoc rendering
+This syntax tells mkdocstrings to extract and render the documentation for the specified class or function directly from the source code.
 
-Auto-generated Python documentation with `lucidoc` rendering is configured in the `lucidoc` sections of `mkdocs.yml`.
+### Jupyter Notebooks
+
+Jupyter notebooks are now rendered automatically using the **mkdocs-jupyter** plugin. Configure which notebooks to include in the `plugins` section of `mkdocs.yml`:
 
 ```yaml
-lucidoc:
-  peppy: path/to/output.md
+plugins:
+  - mkdocs-jupyter:
+      include:
+        - peppy/notebooks/*.ipynb
+        - looper/notebooks/*.ipynb
 ```
 
-#### Configuring jupyter rendering
+Notebooks are rendered directly from `.ipynb` files during the build - no conversion step is needed.
 
-Configure jupyter notebeeoks in the `jupyter` section, where you specify a list of `in` (for `.ipynb` files) and `out` (for `.md` files) locations.
+### CLI Usage Documentation
 
-```yaml
-jupyter:
-  - in: path/to/notebook_folder1
-    out: path/to/rendered_folder1
-  - in: path/to/notebook_folder2
-    out: path/to/rendered_folder2
-```
-
-There, you can specify which folders contain notebooks, and to where they should be rendered as markdown.
+CLI usage documentation for geofetch can be updated manually when needed using the helper script:
+
+```bash
+python scripts/generate_cli_usage_docs.py
+```
+
+This script reads the template at `docs/geofetch/usage-template.md.tpl` and runs `geofetch --help` to generate `docs/geofetch/code/usage.md`. This only needs to be run when the CLI interface changes.
 
 ### Can we version the docs?
 
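The README above says a GitHub Action runs `mkdocs gh-deploy` on commits to `master`, but the workflow file itself is not part of this diff excerpt. A minimal sketch of what such a workflow could look like, assuming a standard setup (the file name, action versions, Python version, and trigger are illustrative, not taken from this repository):

```yaml
# Hypothetical .github/workflows/docs.yml -- a sketch, not a file from this commit.
name: Publish docs
on:
  push:
    branches: [master]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # Install the build toolchain named in the README's Prerequisites section
      - run: pip install mkdocs-material mkdocstrings[python] mkdocs-jupyter
      # Build the site and push it to the gh-pages branch in one step
      - run: mkdocs gh-deploy --force
```

`mkdocs gh-deploy` both builds the site and force-pushes the result to `gh-pages`, which is why no separate `mkdocs build` step is needed in the workflow.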

_typos.toml

Lines changed: 3 additions & 1 deletion
@@ -4,4 +4,6 @@ extend-exclude = ["*.ipynb", "*.svg"]
 [default.extend-words]
 opf = "opf"
 PN="PN"
-Sur="Sur"
+Sur="Sur"
+certifi = "certifi"
+Tru = "Tru"

autodoc.py

Lines changed: 0 additions & 102 deletions
This file was deleted.

docs/eido/code/cli.md

Lines changed: 2 additions & 2 deletions
@@ -80,7 +80,7 @@ eido validate peppro_paper.yaml -s http://schema.databio.org/pep/2.0.0.yaml -e
 Validation successful
 
 
-Any PEP should validate against that schema, which describes generic PEP format. We can go one step further and validate it against the PEPPRO schema, which describes Proseq projects specfically for this pipeline:
+Any PEP should validate against that schema, which describes generic PEP format. We can go one step further and validate it against the PEPPRO schema, which describes Proseq projects specifically for this pipeline:
 
 
 ```bash
@@ -144,7 +144,7 @@ eido validate -h
 
 Let's use `eido convert` command to convert PEPs to a variety of different formats. `eido` supports a plugin system, which can be used by other tool developers to create Python plugin functions that save PEPs in a desired format. Please refer to the documentation for more details. For now let's focus on a couple of plugins that are built-in in `eido`.
 
-To see what plugins are currently avaialable in your Python environment call:
+To see what plugins are currently available in your Python environment call:
 
 
 ```bash

docs/eido/code/demo.md

Lines changed: 3 additions & 3 deletions
@@ -129,7 +129,7 @@ required:
 - samples
 ```
 
-PEPs to succesfully validate against this schema will need to fulfill all the generic PEP2.0.0 schema requirements _and_ fulfill the new `my_numeric_attribute` requirement.
+PEPs to successfully validate against this schema will need to fulfill all the generic PEP2.0.0 schema requirements _and_ fulfill the new `my_numeric_attribute` requirement.
 
 ### How importing works
 
@@ -306,7 +306,7 @@ validate_project(project=p, schema="../tests/data/schemas/test_schema_invalid.ya
 
 ## Config validation
 
-Similarily, the config part of the PEP can be validated; the function inputs remain the same
+Similarly, the config part of the PEP can be validated; the function inputs remain the same
 
 
 ```python
@@ -326,7 +326,7 @@ validate_sample(
 
 ## Output details
 
-As depicted above the error raised by the `jsonschema` package is very detailed. That's because the entire validated PEP is printed out for the user reference. Since it can get overwhelming in case of the multi sample PEPs each of the `eido` functions presented above privide a way to limit the output to just the general information indicating the unmet schema requirements
+As depicted above the error raised by the `jsonschema` package is very detailed. That's because the entire validated PEP is printed out for the user reference. Since it can get overwhelming in case of the multi sample PEPs each of the `eido` functions presented above provide a way to limit the output to just the general information indicating the unmet schema requirements
 
 
 ```python
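The demo.md hunks above reference a schema that fulfills "all the generic PEP2.0.0 schema requirements _and_ ... the new `my_numeric_attribute` requirement", with `samples` listed under `required:`. The schema file itself is not shown in this excerpt; a minimal sketch of what such an eido schema could look like, where only the `my_numeric_attribute` name and the PEP 2.0.0 import come from the text and everything else (description, type choice) is illustrative:

```yaml
# Illustrative eido schema sketch -- not a file from this commit.
description: Example schema extending the generic PEP 2.0.0 schema
imports:
  - http://schema.databio.org/pep/2.0.0.yaml
properties:
  samples:
    type: array
    items:
      type: object
      properties:
        my_numeric_attribute:
          type: number
      required:
        - my_numeric_attribute
required:
  - samples
```

A PEP validated against this sketch must satisfy both the imported generic schema and the extra per-sample requirement, which is the behavior the demo describes.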

docs/eido/code/plugin-api-docs.md

Lines changed: 31 additions & 84 deletions
@@ -1,95 +1,42 @@
-<script>
-document.addEventListener('DOMContentLoaded', (event) => {
-  document.querySelectorAll('h3 code').forEach((block) => {
-    hljs.highlightBlock(block);
-  });
-});
-</script>
+# Eido Built-in Filters API
 
-<style>
-h3 .content {
-  padding-left: 22px;
-  text-indent: -15px;
-}
-h3 .hljs .content {
-  padding-left: 20px;
-  margin-left: 0px;
-  text-indent: -15px;
-  martin-bottom: 0px;
-}
-h4 .content, table .content, p .content, li .content { margin-left: 30px; }
-h4 .content {
-  font-style: italic;
-  font-size: 1em;
-  margin-bottom: 0px;
-}
+## Overview
 
-</style>
+Eido provides built-in filter functions that can transform PEP projects into different output formats. These filters are useful for converting PEPs to various representations like YAML, CSV, or other formats.
 
+### Available Filters
 
-# Package `eido` Documentation
+Eido includes several built-in filters for converting and exporting PEP data:
 
+- **basic_pep_filter**: Returns the basic PEP representation
+- **yaml_pep_filter**: Converts PEP to YAML format
+- **csv_pep_filter**: Exports sample tables as CSV
+- **yaml_samples_pep_filter**: Exports only sample data as YAML
 
-Project configuration
+## API Reference
 
-```python
-def basic_pep_filter(p, **kwargs) -> Dict[str, str]
-```
+### Filter Functions
 
-Basic PEP filter, that does not convert the Project object.
+::: eido.basic_pep_filter
+    options:
+      docstring_style: google
+      show_source: true
+      show_signature: true
 
-This filter can save the PEP representation to file, if kwargs include `path`.
-#### Parameters:
+::: eido.yaml_pep_filter
+    options:
+      docstring_style: google
+      show_source: true
+      show_signature: true
 
-- `p` (`peppy.Project`): a Project to run filter on
+::: eido.csv_pep_filter
+    options:
+      docstring_style: google
+      show_source: true
+      show_signature: true
 
-
-
-
-```python
-def yaml_pep_filter(p, **kwargs) -> Dict[str, str]
-```
-
-YAML PEP filter, that returns Project object representation.
-
-This filter can save the YAML to file, if kwargs include `path`.
-#### Parameters:
-
-- `p` (`peppy.Project`): a Project to run filter on
-
-
-
-
-```python
-def csv_pep_filter(p, **kwargs) -> Dict[str, str]
-```
-
-CSV PEP filter, that returns Sample object representations
-
-This filter can save the CSVs to files, if kwargs include
-`sample_table_path` and/or `subsample_table_path`.
-#### Parameters:
-
-- `p` (`peppy.Project`): a Project to run filter on
-
-
-
-
-```python
-def yaml_samples_pep_filter(p, **kwargs) -> Dict[str, str]
-```
-
-YAML samples PEP filter, that returns only Sample object representations.
-
-This filter can save the YAML to file, if kwargs include `path`.
-#### Parameters:
-
-- `p` (`peppy.Project`): a Project to run filter on
-
-
-
-
-
-
-*Version Information: `eido` v0.2.2, generated by `lucidoc` v0.4.4*
+::: eido.yaml_samples_pep_filter
+    options:
+      docstring_style: google
+      show_source: true
+      show_signature: true
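The `:::` directives added above are only rendered if the mkdocstrings plugin is enabled in `mkdocs.yml`. The repository's actual `mkdocs.yml` is not shown in this commit excerpt; a minimal sketch of the kind of plugins entry that would resolve them, using the mkdocstrings Python handler (the handler-level options mirror the per-directive options used above and are illustrative):

```yaml
# Sketch of an mkdocs.yml plugins entry -- not a file from this commit.
plugins:
  - search
  - mkdocstrings:
      handlers:
        python:
          options:
            docstring_style: google
            show_source: true
```

Options set here act as handler-wide defaults; the per-directive `options:` blocks in the markdown files override them for individual objects.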
