The ThreadPilot Daily Digest system is an AI-powered platform for analyzing team communication and generating digests. It automatically processes Slack conversations from multiple team channels, extracts key insights using specialized AI agents, and generates personalized daily digests that keep teams informed about cross-functional updates, blockers, and decisions.
- System: The ThreadPilot Daily Digest pipeline as a whole, encompassing the components below
- TeamAnalyzer: AI agent that extracts updates, blockers, and decisions from team messages
- DependencyLinker: AI agent that detects cross-team dependencies and coordination needs
- DigestOrchestrator: Main pipeline component that coordinates all processing steps
- MessageAggregator: Component that fetches and filters messages from Slack channels
- PersonalizationEngine: Component that ranks content based on user personas and preferences
- FeedbackSystem: Component that learns from user reactions to improve future digests
- MemoryStore: Persistent storage for blockers, decisions, and learning data
- MockClient: In-process Slack client simulator for development and testing
- StructuredEvent: Standardized data structure for extracted insights (decisions, blockers, updates)
- CrossTeamAlert: Notification about dependencies between teams requiring attention
- DigestDistributor: Component that posts digests to appropriate Slack channels and users
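To make the two central data structures concrete, here is a minimal Python sketch of StructuredEvent and CrossTeamAlert; the field names and defaults are illustrative assumptions, not the actual schema:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class EventType(Enum):
    UPDATE = "update"
    BLOCKER = "blocker"
    DECISION = "decision"


@dataclass
class StructuredEvent:
    """One extracted insight; fields are illustrative assumptions."""
    event_type: EventType
    team: str
    author: str
    summary: str
    severity: str = "normal"  # e.g. "low" / "normal" / "high"


@dataclass
class CrossTeamAlert:
    """A cross-team dependency needing attention; fields are illustrative."""
    waiting_team: str
    blocking_team: str
    description: str
    recommended_action: str
    suggested_owner: Optional[str] = None
```

Keeping both shapes as plain dataclasses makes them easy to serialize into the MemoryStore and to assert on in tests.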
User Story: As a team lead, I want the system to automatically collect messages from all relevant team channels, so that I can get a comprehensive view of cross-team activity without manually checking multiple channels.
- WHEN the system runs, THE MessageAggregator SHALL fetch messages from all configured team channels (mechanical, electrical, software, product, QA)
- WHEN fetching messages, THE MessageAggregator SHALL filter out noise (such as bot notifications and join/leave events) and focus on substantive conversations
- WHEN processing messages, THE System SHALL handle both real Slack API integration and mock data for development
- WHEN no new messages exist in a channel, THE System SHALL create an empty analysis indicating no activity
- WHEN the system encounters API rate limits, THE System SHALL implement appropriate delays and retry logic
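The rate-limit requirement above can be sketched as exponential backoff with jitter. This is a generic pattern, not the system's actual retry logic; note that the real Slack API returns a `Retry-After` header on HTTP 429 responses, which a production client should honor instead of guessing delays:

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for the rate-limit error a real Slack client would raise."""


def fetch_with_retry(fetch, max_retries=3, base_delay=1.0):
    """Call fetch(), backing off exponentially when rate-limited."""
    for attempt in range(max_retries + 1):
        try:
            return fetch()
        except RateLimitError:
            if attempt == max_retries:
                raise  # out of retries: surface the error to the caller
            # Exponential backoff with jitter: base, 2x, 4x, ... plus noise.
            time.sleep(base_delay * (2 ** attempt + random.random()))
```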
User Story: As a project manager, I want the system to automatically identify key updates, blockers, and decisions from team conversations, so that I can quickly understand what's happening across teams without reading every message.
- WHEN analyzing team messages, THE TeamAnalyzer SHALL extract status updates with author and category information
- WHEN analyzing team messages, THE TeamAnalyzer SHALL identify blockers with severity, owner, and status details
- WHEN analyzing team messages, THE TeamAnalyzer SHALL detect decisions with context and impact information
- WHEN analyzing team messages, THE TeamAnalyzer SHALL generate action items with owners and priorities
- WHEN analyzing team messages, THE TeamAnalyzer SHALL provide a summary of team activity and overall tone assessment
- WHEN processing completes, THE System SHALL convert all extracted insights into standardized StructuredEvent objects
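The final conversion step might look like the following sketch, which flattens a TeamAnalyzer result into uniform event dicts. The input shape (a dict with "updates", "blockers", and "decisions" lists) is an assumption inferred from the requirements above, not the real analyzer schema:

```python
def to_structured_events(team, analysis):
    """Flatten one team's analysis into a list of uniform event dicts."""
    events = []
    for kind in ("updates", "blockers", "decisions"):
        for item in analysis.get(kind, []):
            events.append({
                "type": kind.rstrip("s"),  # "updates" -> "update", etc.
                "team": team,
                "author": item.get("author", "unknown"),
                "summary": item.get("summary", ""),
            })
    return events
```

A uniform shape like this lets the DependencyLinker and PersonalizationEngine treat all event types through one interface.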
User Story: As an engineering manager, I want the system to automatically detect when teams are waiting on each other or have coordination needs, so that I can proactively address blockers and improve team collaboration.
- WHEN analyzing events from multiple teams, THE DependencyLinker SHALL identify teams waiting on other teams
- WHEN analyzing events from multiple teams, THE DependencyLinker SHALL detect interface changes that affect downstream teams
- WHEN analyzing events from multiple teams, THE DependencyLinker SHALL identify timeline changes that impact dependent work
- WHEN analyzing events from multiple teams, THE DependencyLinker SHALL detect shared resource conflicts between teams
- WHEN dependencies are found, THE System SHALL create CrossTeamAlert objects with recommended actions and suggested owners
- WHEN dependencies are detected, THE System SHALL generate cross-team highlights for leadership visibility
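Since the DependencyLinker is an AI agent, its actual detection is model-driven; the toy heuristic below only illustrates the expected output shape (a waiting team, a blocking team, and a recommended action):

```python
import re


def find_waiting_dependencies(events):
    """Scan event summaries for 'waiting on <team>' phrases (toy heuristic)."""
    alerts = []
    for event in events:
        match = re.search(r"waiting on (?:the )?(\w+) team", event["summary"], re.I)
        if match:
            blocking = match.group(1).lower()
            alerts.append({
                "waiting_team": event["team"],
                "blocking_team": blocking,
                "recommended_action": f"Sync {event['team']} with {blocking}",
            })
    return alerts
```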
User Story: As a system user, I want the digest quality to improve over time based on my reactions and feedback, so that the system becomes more accurate and relevant to my needs.
- WHEN users react to digest items with emojis, THE FeedbackSystem SHALL capture and store the feedback with item associations
- WHEN processing feedback, THE System SHALL map emoji reactions to feedback types (accurate, wrong, missing context, irrelevant)
- WHEN generating new digests, THE System SHALL apply confidence adjustments based on historical feedback patterns
- WHEN feedback indicates consistent issues, THE System SHALL generate prompt directive patches to improve future analysis
- WHEN users provide the same feedback multiple times, THE System SHALL prevent duplicate feedback storage
- WHEN feedback processing fails, THE System SHALL continue digest generation without blocking the main pipeline
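The emoji mapping and duplicate prevention above can be sketched as follows; the specific emoji names are illustrative assumptions, since the real mapping is configuration-dependent:

```python
# Assumed emoji-to-feedback mapping; the deployed set may differ.
EMOJI_FEEDBACK = {
    "white_check_mark": "accurate",
    "x": "wrong",
    "question": "missing_context",
    "wastebasket": "irrelevant",
}


class FeedbackStore:
    """Keeps one feedback entry per (user, item, type), ignoring duplicates."""

    def __init__(self):
        self._seen = set()
        self.entries = []

    def record(self, user, item_id, emoji):
        feedback_type = EMOJI_FEEDBACK.get(emoji)
        if feedback_type is None:
            return False  # unmapped emoji: ignore rather than fail
        key = (user, item_id, feedback_type)
        if key in self._seen:
            return False  # duplicate feedback: skip storage
        self._seen.add(key)
        self.entries.append({"user": user, "item": item_id, "type": feedback_type})
        return True
```

Returning False instead of raising keeps feedback handling from ever blocking the main digest pipeline, per the last criterion above.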
User Story: As a team member with a specific role and team affiliation, I want the digest content to be prioritized based on what's most relevant to my responsibilities, so that I can focus on the information that matters most to me.
- WHEN generating personalized digests, THE PersonalizationEngine SHALL apply role-based content boosting (Lead, IC, PM, Executive)
- WHEN generating personalized digests, THE PersonalizationEngine SHALL apply team-based topic filtering based on domain expertise
- WHEN ranking content, THE System SHALL boost cross-team items for leadership roles and reduce them for individual contributors
- WHEN users have custom preferences, THE System SHALL merge role and team personas with user-specific overrides
- WHEN determining content relevance, THE System SHALL match content against persona-specific topics of interest
- WHEN filtering content, THE System SHALL apply minimum severity thresholds based on user persona
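The ranking rules above might combine like this minimal scoring sketch; the boost values and persona fields are illustrative assumptions, since actual weights would live in configuration:

```python
# Illustrative per-role multipliers for cross-team items (assumed values).
ROLE_CROSS_TEAM_BOOST = {"lead": 2.0, "executive": 2.5, "pm": 1.5, "ic": 0.5}
SEVERITY_RANK = {"low": 0, "normal": 1, "high": 2}


def score_item(item, persona):
    """Score one digest item for one persona; None means filtered out."""
    if SEVERITY_RANK[item["severity"]] < SEVERITY_RANK[persona["min_severity"]]:
        return None  # below the persona's minimum severity threshold
    score = 1.0
    if item.get("cross_team"):
        # Leadership roles get a boost; individual contributors a reduction.
        score *= ROLE_CROSS_TEAM_BOOST.get(persona["role"], 1.0)
    if item.get("topic") in persona.get("topics", ()):
        score += 1.0  # matches a persona-specific topic of interest
    return score
```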
User Story: As a team member, I want to receive digest information through appropriate channels based on my role and the content importance, so that I get the right information at the right level of detail.
- WHEN distributing digests, THE DigestDistributor SHALL post main digest summaries to the designated digest channel
- WHEN distributing digests, THE DigestDistributor SHALL create threaded replies with detailed team-specific information
- WHEN high-priority cross-team alerts exist, THE DigestDistributor SHALL send direct messages to leadership users
- WHEN posting to Slack, THE System SHALL format content using appropriate Slack blocks and formatting
- WHEN distribution fails, THE System SHALL log errors and continue with remaining distribution targets
- WHEN running in preview mode, THE System SHALL generate formatted output without posting to Slack
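Slack formatting can be sketched with Block Kit JSON shapes; `header` and `section` are real Block Kit block types, but the rendering choices here are illustrative:

```python
def digest_blocks(title, items):
    """Render a digest as Slack Block Kit blocks: a header plus one
    section per item."""
    blocks = [{"type": "header", "text": {"type": "plain_text", "text": title}}]
    for item in items:
        blocks.append({
            "type": "section",
            "text": {"type": "mrkdwn", "text": f"*{item['team']}*: {item['summary']}"},
        })
    return blocks
```

Because this returns plain data rather than posting, preview mode can simply print the blocks while production mode passes them to `chat.postMessage`.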
User Story: As a system operator, I want the system to remember previous decisions and blockers across runs, so that it can track resolution status and avoid duplicate reporting.
- WHEN processing events, THE MemoryStore SHALL persist decisions to prevent duplicate reporting
- WHEN processing events, THE MemoryStore SHALL persist blockers and track their resolution status over time
- WHEN running the digest pipeline, THE System SHALL maintain state about the last successful run timestamp
- WHEN starting a new run, THE System SHALL only process messages newer than the last successful run
- WHEN a run completes successfully, THE System SHALL update the state with the new timestamp and processed channel information
- WHEN running in mock mode, THE System SHALL skip state persistence to avoid interfering with development
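The state-tracking requirements above can be sketched as a small JSON state file; the file layout is an assumption for illustration:

```python
import json
import time
from pathlib import Path


def load_state(path):
    """Return prior run state, or a default when no state file exists yet."""
    p = Path(path)
    if p.exists():
        return json.loads(p.read_text())
    return {"last_run_ts": 0.0, "channels": []}


def save_state(path, channels, mock_mode=False):
    """Persist the new run timestamp; skip writes entirely in mock mode."""
    if mock_mode:
        return None  # don't interfere with development state
    state = {"last_run_ts": time.time(), "channels": channels}
    Path(path).write_text(json.dumps(state))
    return state
```

The `last_run_ts` value is what lets the next run process only messages newer than the previous successful run.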
User Story: As a developer, I want to test the system with realistic data without requiring live Slack integration, so that I can develop and validate features efficiently.
- WHEN running in mock mode, THE System SHALL use the MockClient instead of real Slack API calls
- WHEN generating synthetic data, THE System SHALL create realistic multi-day conversations with cross-team dependencies
- WHEN generating synthetic data, THE System SHALL include diverse personas across mechanical, electrical, software, product, and QA teams
- WHEN generating synthetic data, THE System SHALL create story arcs with blockers, decisions, and resolution patterns
- WHEN running tests, THE System SHALL support both unit tests for individual components and integration tests for the full pipeline
- WHEN in preview mode, THE System SHALL generate complete digest output without posting to any external systems
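A MockClient along these lines would satisfy the mock-mode requirements; the `fetch_messages` interface is an assumed shape for the real client, not its documented API:

```python
class MockClient:
    """Drop-in stand-in for the Slack client, serving canned conversations."""

    def __init__(self, messages_by_channel):
        # Maps channel name -> list of {"ts": float, "text": str} messages.
        self._messages = messages_by_channel

    def fetch_messages(self, channel, oldest_ts=0.0):
        """Mirror the assumed real interface: messages newer than oldest_ts."""
        return [m for m in self._messages.get(channel, []) if m["ts"] > oldest_ts]
```

Because the mock honors `oldest_ts`, the incremental-processing logic from the state requirements can be exercised in tests without live Slack data.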
User Story: As a system administrator, I want to configure channel mappings, user lists, and processing parameters through environment variables, so that I can deploy the system across different environments without code changes.
- WHEN starting the system, THE System SHALL load team channel mappings from environment variables
- WHEN starting the system, THE System SHALL load leadership user lists from environment configuration
- WHEN starting the system, THE System SHALL load processing parameters like lookback hours and summary length limits
- WHEN starting the system, THE System SHALL load AI model configuration including model name and temperature settings
- WHEN environment variables are missing, THE System SHALL use sensible defaults and continue operation
- WHEN running in different environments, THE System SHALL support both development and production configurations
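Environment loading with defaults might look like the sketch below; the variable names and default values are illustrative assumptions, not the deployed configuration:

```python
import os


def load_config(env=None):
    """Read processing parameters from the environment, falling back to
    sensible defaults when a variable is missing."""
    if env is None:
        env = os.environ
    return {
        "lookback_hours": int(env.get("DIGEST_LOOKBACK_HOURS", "24")),
        "summary_max_chars": int(env.get("DIGEST_SUMMARY_MAX_CHARS", "500")),
        "model_name": env.get("DIGEST_MODEL_NAME", "default-model"),  # placeholder
        "temperature": float(env.get("DIGEST_MODEL_TEMPERATURE", "0.2")),
    }
```

Taking `env` as a parameter keeps the loader testable and lets development and production supply different environments without code changes.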
User Story: As a system operator, I want to monitor the system's performance and track key metrics, so that I can ensure reliable operation and identify areas for improvement.
- WHEN processing messages, THE System SHALL log the number of messages processed per channel
- WHEN running AI agents, THE System SHALL track processing time and success rates for each agent
- WHEN extracting events, THE System SHALL count and log the number of each event type extracted
- WHEN distribution completes, THE System SHALL log success and failure counts for each distribution target
- WHEN errors occur, THE System SHALL log detailed error information with context for debugging
- WHEN the pipeline completes, THE System SHALL provide a summary of all processing metrics and outcomes
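The metrics requirements above suggest a simple accumulator that each stage increments and the pipeline reads out at the end; this is a minimal sketch, not the system's actual observability layer:

```python
from collections import Counter


class RunMetrics:
    """Accumulates per-run counters and produces an end-of-run summary."""

    def __init__(self):
        self.counts = Counter()

    def incr(self, name, n=1):
        """Bump a named counter, e.g. 'messages:qa' or 'events:blocker'."""
        self.counts[name] += n

    def summary(self):
        """Return all counters for the end-of-pipeline summary log."""
        return dict(self.counts)
```

Namespaced counter keys (per channel, per agent, per event type) cover the per-channel message counts, per-agent success rates, and per-type event counts called for above without needing separate structures.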