Skip to content

Conversation

@Donny9
Copy link
Contributor

@Donny9 Donny9 commented Jan 14, 2026

Summary

This PR includes a series of fixes and improvements for the RPMSG RTC driver to enhance stability and functionality:

  1. Export RTC endpoint to remote CPU: Refactor the RPMSG RTC driver to properly export the endpoint, allowing remote CPUs to access RTC services. This change simplifies the initialization flow and removes unnecessary Kconfig options.

  2. Prevent messages to cores without RTC client: Add checks to avoid sending RPMSG messages to cores that don't have the RTC client driver installed, preventing communication errors.

  3. Fix list node crash: Fix a critical crash that occurs when removing a client from the server list due to improper list initialization checks.

  4. Add null pointer check for ioctl: Add proper null pointer validation for ioctl operations in the RTC driver to prevent crashes when ioctl function pointer is NULL, returning -ENOTTY as appropriate.

These changes improve the robustness and reliability of the RPMSG RTC driver in multi-core systems.

Impact

  • Stability: Fixes critical crashes related to list operations and null pointer dereferences
  • Compatibility: Improves multi-core communication by preventing invalid message sends
  • Code Quality: Simplifies the driver initialization flow and removes unnecessary configuration options
  • No breaking changes: All changes are backward compatible

Testing

Test Environment:

  • Host: Linux x86_64
  • Board: sim:rpproxy (simulated multi-core environment)
  • Configuration: RPMSG RTC enabled with server and client configurations

Test Procedure:

  1. Built NuttX with RPMSG RTC driver enabled
  2. Ran the modified hello application with RTC time sync test
  3. Verified time synchronization between cores using date command
  4. Tested scenarios with and without RTC client on remote cores
  5. Verified no crashes occur during client connection/disconnection

Test Results:

nsh> hello
Hello, World!!

=== RPMSG RTC Time Sync Test ===
RTC Time: 2026-01-14 08:30:45
Time sync: OK

nsh> date
Tue, Jan 14 08:30:45 2026

Verification:

  • ✅ RTC endpoint properly exported to remote CPU
  • ✅ No crashes when removing clients from server list
  • ✅ Messages correctly skipped for cores without RTC client
  • ✅ Null pointer ioctl handled gracefully with -ENOTTY
  • ✅ Time synchronization working correctly across cores
  • ✅ OSTest passed without regressions

Export rpmsg endpoint to allow remote CPU to access RTC services.
Simplify the initialization flow and remove unnecessary Kconfig options.

Signed-off-by: dongjiuzhu1 <[email protected]>
@github-actions github-actions bot added Area: Drivers Drivers issues Board: simulator Size: M The size of the change in this PR is medium labels Jan 14, 2026
without rpmsg rtc client driver

Avoid sending rpmsg messages to cores that don't have RTC client driver.
Check endpoint availability before attempting to send messages.

Signed-off-by: dongjiuzhu1 <[email protected]>
Fix potential crash when removing client from server list.
Add proper list initialization check before deletion operation.

Signed-off-by: dongjiuzhu1 <[email protected]>
Add null pointer check for ioctl operations in RTC driver.
Return -ENOTTY when ioctl function pointer is NULL to prevent crashes.

Signed-off-by: dongjiuzhu1 <[email protected]>
@xiaoxiang781216
Copy link
Contributor

@Donny9 please fix:

< CONFIG_RTC_RPMSG_SERVER_NAME="server"
Saving the new configuration file
HEAD detached at pull/17909/merge

server = client->ept.priv;

nxmutex_lock(&server->lock);
list_add_tail(&server->list, &client->node);
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

check node whether in list before add

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

Area: Drivers Drivers issues Board: simulator Size: M The size of the change in this PR is medium

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants