prerequisites:
- Understanding of **container runtimes** (containerd) and **CNI
networking**.
- Basic knowledge of **communication protocols** (MQTT, HTTP, etc.).
- (Optional but helpful) Familiarity with **edge-cloud
architectures** and **data-flow orchestration**.

author: Tinkerblox
content/learning-paths/automotive/tinkerblox_ultraedge/background.md
weight: 2
layout: "learningpathall"
---

### Introduction

UltraEdge is an edge-native, high-performance execution fabric designed to run AI and mixed workloads without the overhead of traditional container platforms. While technologies like Docker and Kubernetes were created for general-purpose cloud environments, they introduce latency, resource bloat, and non-deterministic behavior, all of which are poorly suited to edge deployments.

UltraEdge takes a fundamentally different approach. It replaces heavyweight container runtimes with a lean, deterministic execution stack purpose-built for performance-oriented compute. This enables millisecond-level startup times, predictable performance, and a dramatically smaller resource footprint—allowing workloads to start faster, run closer to the hardware, and make full use of available CPU and GPU resources.

At the core of UltraEdge are two specialized execution systems:

- **MicroStack**, optimized for enterprise and mixed workloads
- **NeuroStack**, purpose-built for AI inference and accelerated compute

Together, these systems deliver up to **30× faster startup times** and **3.8× smaller package sizes** compared to conventional container-based approaches. By removing unnecessary abstraction layers, UltraEdge ensures compute cycles are spent on execution—not on managing the runtime itself.

This learning path introduces the architecture, principles, and components that make UltraEdge a high-performance execution fabric for modern edge infrastructure.

### UltraEdge Overview

UltraEdge was built with the vision of **orchestrating the edge-native execution fabric for high-performance compute infrastructure**.

Key design principles and capabilities include:

- **Built-for-edge execution stack**: a lightweight, adaptive platform for **AI and mixed workloads**, optimized for low latency, high determinism, and minimal footprint.
- **Dual workload focus**: native support for both traditional enterprise workloads and next-generation AI workloads, without compromising performance.
- **Full-stack enablement**: delivered through the **MicroStack** and **NeuroStack** execution systems, each optimized for its workload domain.
- **High fungibility and efficiency**: maximizes utilization of CPU and GPU resources while reducing operational and infrastructure overhead.
- **Ecosystem-aligned development**: developed through strategic alliances with leading technology partners and curated for **AI@Edge**, including alignment with Edge AI Foundation deployment approaches.
- **Cluster-aware orchestration**: integrates with Kubernetes-based stacks and Slurm for managed cluster orchestration.
- **Built-in observability**: provides control-plane visibility, diagnostics, and telemetry for operational insight.
- **Lower total cost of ownership (TCO)**: demonstrable reduction in CPU/GPU cluster costs through faster startup, higher utilization, and reduced runtime overhead.

### UltraEdge High-Level Architecture

UltraEdge is composed of layered systems, each responsible for a distinct aspect of execution and orchestration:

![High-level Architecture diagram](https://raw.githubusercontent.com/Tinkerbloxsupport/arm-learning-path-support/main/static/images/High-level%20architecture%20diagram.png)

---

#### 1. UltraEdge Core Layer
*Manages the foundational execution fabric, including:*
* Compute infrastructure management
* Service orchestration and lifecycle management
* Rule-engine orchestration
* Data-flow management across workloads

#### 2. UltraEdge Boost Layer
*Provides performance-critical acceleration, including:*
* Low-level optimized routines
* FFI (Foreign Function Interface) integrations
* Dynamic connectors and southbound protocol adapters

#### 3. UltraEdge Prime Layer
*Implements workload intelligence and orchestration logic, including:*
* Business logic execution
* Trigger and activation sequences
* AI and mixed workload coordination

#### 4. UltraEdge Dock
*Provides workload and cluster orchestration through:*
* Kubernetes-based stacks
* Slurm-based scheduling environments

#### 5. UltraEdge Edge-Cloud Connect Layer
*Enables data movement and observability, including:*
* Data streaming to databases (e.g., InfluxDB, SQLite)
* Diagnostics, logging, and telemetry outputs

---
title: Debian/Ubuntu installation - UltraEdge

weight: 5

layout: "learningpathall"
---

### Installation Process for UltraEdge on Ubuntu/Debian

Follow these steps to initialize and register your device within the **Uncloud** ecosystem:

1. **Access the Platform:**

* Navigate to the [Uncloud Dashboard](https://dev.tinkerblox.io/) and log in with your credentials.

2. **Provision a New Device:**
* Go to **Device Management** > **New Device**.

![Device Management](https://raw.githubusercontent.com/Tinkerbloxsupport/arm-learning-path-support/main/static/images/Device_managment.png)

![Creating New Device](https://raw.githubusercontent.com/Tinkerbloxsupport/arm-learning-path-support/main/static/images/creating_new_device.png)

* Click the **three dots (options menu)** next to your device entry and select **Initialize**.

![Initialize Device](https://raw.githubusercontent.com/Tinkerbloxsupport/arm-learning-path-support/main/static/images/Initialize%20.png)

3. **Install prerequisites:**

* Update the package index and install `curl` and `jq`:

```bash
sudo apt update
sudo apt install -y curl jq
```


4. **Retrieve Installation Command Details:**

* Copy the generated device installation command from the **Uncloud** portal to your clipboard.

![Installation Command](https://raw.githubusercontent.com/Tinkerbloxsupport/arm-learning-path-support/main/static/images/Initialize%20_command.png)

Locate and copy the installation command generated for your account and device. The command below is only an example; your device ID, key, and signed installer URL will differ:

```bash
sudo DEVICE_ID="5b3ff290-0c88-4cd9-8ef7-08de0bded9df" KEY="TB.ApiKey-mlBZgDFc7qyM6ztPjILBCbFEqnVlbvjUpM1Q1IqNP6tA7wNdi97AQ==" sh -c "$(curl "https://tinkerbloxdev.blob.core.windows.net:443/tinkerbloxdev/binaries/installer.sh?sv=2025-01-05&st=2025-11-03T06%3A31%3A55Z&se=2025-11-03T06%3A56%3A55Z&sr=b&sp=r&sig=HNS70HgJyHlhCVQrqvpGdCcaf8%2FtVjdW4RNiiiIPCSUA%3D")"
```

Run your specific installer command in a terminal (or SSH session) on your Ubuntu/Debian device. This installs UltraEdge and initializes the agent.
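
Once the installer finishes, you can optionally confirm that the agent is up. This is a sketch that assumes the installer registers the `tbx-agent` systemd unit referenced in the activation steps below:

```bash
# Check the agent service state and recent log output
sudo systemctl status tbx-agent.service --no-pager
sudo journalctl -u tbx-agent.service -n 20 --no-pager
```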

### Activation of the UltraEdge Agent

On the first boot, the agent will automatically generate a file named
`activation_key.json` at the path:

`/opt/tinkerblox/activation_key.json`

Share this `activation_key.json` file with the TinkerBlox team to
receive a license key (which includes license metadata).
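
If you want to inspect the generated key before sending it, `jq` (installed earlier) can pretty-print it; reading the file may require root:

```bash
# Pretty-print the generated activation key
sudo jq . /opt/tinkerblox/activation_key.json
```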

1. Stop the agent using the following command:

```bash
sudo systemctl stop tbx-agent.service
```

2. Replace the existing `activation_key.json` file in
`/opt/tinkerblox/` with the licensed one provided by TinkerBlox (a combined command-line sketch follows this list).

3. Start the agent:

```bash
sudo systemctl start tbx-agent.service
```
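
Putting the three steps together, here is a minimal sketch; the location of the licensed file (`~/activation_key.json` below) is an assumption, so adjust the path to wherever you saved it:

```bash
# Stop the agent, back up the old key, swap in the licensed key, restart
sudo systemctl stop tbx-agent.service
sudo cp /opt/tinkerblox/activation_key.json /opt/tinkerblox/activation_key.json.bak
sudo cp ~/activation_key.json /opt/tinkerblox/activation_key.json
sudo systemctl start tbx-agent.service
sudo systemctl status tbx-agent.service --no-pager
```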

### Running the Agent Manually

- Binary path: `/bin/tbx-agent`

- To start the agent in the foreground:

```bash
tbx-agent
```

- To stop, press `Ctrl+C` once.

## MicroPac Installation

MicroPac is the core tooling used to build and manage **MicroStack** (general microservices) and **NeuroStack** (AI-native services).

* **Platform Agnostic:** MicroPac is not restricted to a specific operating system; it is fully compatible with both **Debian** and **Yocto** environments, providing a consistent execution layer across different Linux distributions.

* **Build System:** To create a service, the system utilizes a **MicroPacFile** (the declarative configuration) and the **MicroPac Builder** (the high-performance packaging engine).

* **Validation:** The ecosystem includes a **MicroPac Validator**, which verifies the integrity and security of the package created by the builder to ensure it is ready for edge deployment.

#### System Requirements


#### Required Packages

```bash
sudo apt-get update
sudo apt-get install -y tar curl qemu-user-static binfmt-support
```
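
You can optionally confirm that the packages are in place; output details vary by distribution:

```bash
# Quick checks for the required tooling
tar --version | head -n 1
curl --version | head -n 1
# qemu-user-static provides static QEMU binaries such as qemu-arm-static
ls /usr/bin/qemu-*-static | head
# binfmt-support provides the update-binfmts tool used in the next section
sudo update-binfmts --display | head
```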

### Cross-Architecture Support

The **MicroPacFile** is the central declarative configuration used by the builder to define the environment and behavior of your service. This configuration is essential for orchestrating both **MicroStack** (general microservices) and **NeuroStack** (AI/ML) services.

* **Multi-Language Support:** You can configure MicroPacFiles for applications written in **Python, C, and C++**, making it highly versatile for both high-level AI workloads and low-level embedded system tasks.

* **Unified Workloads:** It bridges the gap between complex ML models and resource-constrained embedded software, ensuring consistent execution across diverse hardware.


To build MicroPac packages for a target architecture different from your host, enable the corresponding QEMU binfmt handler so that foreign-architecture binaries can run under emulation during the build. For example, for Armv7:

```bash
# Enable binfmt handling for Armv7 binaries
sudo update-binfmts --enable qemu-armv7
```
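
To see which QEMU interpreters binfmt currently has registered, and whether they are enabled, you can list them. Note that the exact format names (for example `qemu-arm` versus `qemu-armv7`) vary by distribution and QEMU packaging:

```bash
# List registered QEMU binary formats and their status
sudo update-binfmts --display | grep -A 3 '^qemu'
```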

### Installation

- The package is provided as a `.deb` file.

- Install it on your host machine:

```bash
sudo apt install ./<package_name>.deb
```
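
As a quick sanity check that the tooling is installed and on your `PATH`; the Debian package name depends on the `.deb` you were given, so the `grep` pattern below is an assumption:

```bash
# Confirm the builder CLI is discoverable and the package is registered
command -v micropac-builder
dpkg -l | grep -i micropac
```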

### MicroPacFile Schema Creation and Setup

#### File Placement
For the MicroPac Builder to function correctly, the **MicroPacFile** must be placed in the project's root directory alongside your source code and dependency files.

Place a `MicroPacFile` in your project directory.

**Example Directory Structure (video_cv Project):**

```text
video_cv/
├── hooks/ # Lifecycle scripts
├── models/ # ML model weights
├── static/ # Static assets (CSS/JS)
├── templates/ # HTML templates
├── app.py # Main application entry
├── MicroPacFile # REQUIRED: Configuration file
└── requirements.txt # Python dependencies
```

Below is an example `MicroPacFile`:

```console
name: nginx
# ... (remaining MicroPacFile fields not shown)
```

Navigate to your project directory and execute:

```bash
sudo micropac-builder build
```

This generates a file named `<project_name>.mpac`.

### Verifying the MicroPac Setup

To confirm that the `.mpac` package was generated properly, follow these steps (a command-line sketch follows the list):

1. **Locate the Package:** Find the generated file with the `.mpac` extension.
2. **Extract the Contents:** Extract the `.mpac` file using a standard extraction tool (or rename it to `.zip`/`.tar.gz` if necessary to open it).
3. **Verify Contents:** The extracted folder must contain exactly three files:
* **`manifest.yaml`**: Contains the metadata and configuration for the package.
* **RootFS tarball**: The base file system layer (in `.tar` format).
* **Application layer tarball**: The specific application logic/binaries (in `.tar` format).
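
The same check can be scripted. This sketch assumes the package produced for the `video_cv` example is a plain tar archive, as described above; if your `.mpac` is zip-packaged, use `unzip` instead:

```bash
# Unpack the generated package into a scratch directory and list its contents
mkdir -p mpac_check
tar -xf video_cv.mpac -C mpac_check
ls -l mpac_check
# Expect three entries: manifest.yaml, a rootfs tarball, and an application-layer tarball
```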