Dual LLM Processing (Gamma + OpenAI)
Incremental AI batching
Insight generation
Intelligent Innovation Powered by Dual LLM Architecture
IdeaScale AI is an advanced add-on to the IdeaScale innovation management platform, designed to help administrators and moderators extract deeper insights from large volumes of ideas. By leveraging two large language models, Gamma and OpenAI, IdeaScale AI transforms raw idea submissions into actionable intelligence. From thematic clustering to sentiment analysis and idea comparison, the tool enables data-driven decision-making at scale.
4–5 Months
1 Front-end Developer, 1 Back-end Developer, 1 Project Manager
IdeaScale AI extends the core IdeaScale platform by embedding artificial intelligence directly into the innovation lifecycle. Administrators and moderators can now analyze idea submissions more efficiently, identify trends, generate insights, and make informed strategic decisions based on AI-processed data.
Building on our established partnership with IdeaScale, including the Microsoft Teams Integration project, we were glad to take on this project as well. While the Teams app focused on collaboration and accessibility within enterprise environments, IdeaScale AI focuses on intelligent analysis, processing high volumes of idea data through advanced LLM-powered workflows.
If you’d like to explore how we previously extended IdeaScale into Microsoft Teams, read the full IdeaScale Teams Integration case study.
We were responsible for implementing the AI processing logic, the infrastructure connecting the two models, the frontend visualization layers, and the deployment strategies, including government-compliant versions.
The most significant technical challenge stemmed from the volume and size of idea submissions. Many workspaces contained extremely large datasets of ideas, which, when processed simultaneously through LLMs, resulted in:
🔹 Memory overload risks
🔹 API limitations and token constraints
🔹 Processing instability
🔹 Increased error rates during large-batch analysis
Sending hundreds of ideas at once to language models was not sustainable and frequently caused processing failures.
Another layer of complexity was regulatory compliance. We needed to deploy both AI functionality and Teams-related integrations separately for government and non-government users. Government deployments required stricter compliance standards, infrastructure separation, and enhanced validation processes. This led to the creation of a dedicated environment (ideascale.gov) to ensure regulatory alignment and avoid conflicts with commercial deployments.
To address the large-scale idea processing challenge, we implemented an incremental AI batching strategy. Instead of sending 100+ ideas at once, we split the workload into smaller phases: an initial batch of roughly 30 ideas was analyzed first and its results were stored. We then grew the batch size incrementally, roughly 5 ideas at a time, and each new batch was analyzed in the context of the previously processed results. From the accumulated results we generated word clouds, “For” and “Against” idea comparisons, and analytical summaries.
This cascading model allowed us to maintain system stability while preserving contextual continuity across datasets. By chaining AI outputs together intelligently, we avoided overload while maintaining analytical depth.
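To make the cascading model concrete, here is a minimal sketch of the incremental batching loop, assuming the starting batch size (~30 ideas) and increment (~5 ideas) described above. The `analyzeBatch` function and the result shape are hypothetical placeholders for the real Gamma/OpenAI workflow, not the production code.

```typescript
interface Idea {
  id: string;
  text: string;
}

interface BatchResult {
  summary: string;    // analytical summary produced for this batch
  keywords: string[]; // feeds the word cloud
}

// Hypothetical wrapper around the dual-LLM workflow: analyzes a batch of
// ideas in the context of everything processed so far.
declare function analyzeBatch(
  ideas: Idea[],
  priorResults: BatchResult[]
): Promise<BatchResult>;

async function processIdeasIncrementally(ideas: Idea[]): Promise<BatchResult[]> {
  const results: BatchResult[] = [];
  let batchSize = 30; // first phase: ~30 ideas
  let offset = 0;

  while (offset < ideas.length) {
    const batch = ideas.slice(offset, offset + batchSize);

    // Each new batch is analyzed in the context of previously stored results,
    // preserving contextual continuity without overloading the models.
    const result = await analyzeBatch(batch, results);
    results.push(result); // store the intermediate output

    offset += batch.length;
    batchSize += 5; // grow the batch size by ~5 ideas per phase
  }

  return results;
}
```

Chaining the loop this way keeps each individual request well under the models' token limits while still letting later batches build on earlier findings.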
For compliance requirements, we:
🔹 Deployed separate environments for government and commercial users
🔹 Built ideascale.gov as a distinct deployment
🔹 Conducted multiple review calls with government representatives
🔹 Iteratively resolved regulatory and performance-related feedback
This ensured both AI functionality and enterprise governance standards were met.
The project was completed in 4–5 months by a team consisting of 1 Front-End Developer, 1 Back-End Developer, and 1 Project Manager.
Development followed an agile methodology with structured sprints, task prioritization, and iterative releases. The Project Manager played a key role in coordinating communication between IdeaScale stakeholders and the internal development team. Responsibilities included sprint planning, backlog management, milestone tracking, and compliance coordination.
The backend developer focused on building the dual-LLM workflow between Gamma and OpenAI, implementing batching logic, optimizing request flows, and ensuring stable processing. The frontend developer worked on visualization layers, AI output presentation, and integration into the existing IdeaScale interface.
Continuous testing cycles were conducted to validate performance under varying dataset sizes and ensure reliable incremental processing.
🔹 Frontend – React.js / Vite for fast development and optimized build performance.
🔹 Backend – Node.js / Nest.js to efficiently manage AI workflows, API communication, and implement incremental batching logic.
🔹 Database – PostgreSQL as the relational database to store structured idea data, AI outputs, metadata, and system logs.
🔹 Infrastructure – Kubernetes (AWS EKS): the system was deployed on Kubernetes using AWS EKS, enabling containerized deployment and high availability.
🔹 PGVector – implemented for vector storage and similarity search.
🔹 AWS Bedrock – used to access foundation models in a secure, scalable cloud environment.
🔹 OpenAI LLM Models – leveraged for advanced natural language understanding, summarization, and comparative analysis.
🔹 Llama3 (Self-Embedded) – integrated as a self-hosted model for additional processing flexibility and deployment control.
🔹 RAG (Retrieval-Augmented Generation) – implemented to enhance AI output accuracy by combining stored contextual data with generative model responses.
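To show how the PGVector and RAG pieces fit together, here is a minimal retrieval sketch using the node-postgres client. The `ideas` table, its `embedding` column, and the `embed` helper are illustrative assumptions, not the production schema; the real system would generate embeddings through one of the models listed above.

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from standard PG* env vars

// Hypothetical embedding helper; in practice this would call an embedding
// model (e.g. via AWS Bedrock or OpenAI).
declare function embed(text: string): Promise<number[]>;

// Retrieval step of the RAG flow: find the stored ideas most similar to a
// query, then hand them to the generative model as context.
async function retrieveSimilarIdeas(query: string, limit = 5) {
  const queryEmbedding = await embed(query);

  // pgvector accepts the vector as a string literal like '[0.1,0.2,...]';
  // the <=> operator returns cosine distance.
  const { rows } = await pool.query(
    `SELECT id, text, embedding <=> $1 AS distance
       FROM ideas
      ORDER BY embedding <=> $1
      LIMIT $2`,
    [`[${queryEmbedding.join(",")}]`, limit]
  );

  return rows; // injected into the prompt as retrieved context
}
```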
🔹 Dual LLM Processing (Gamma + OpenAI) for advanced idea analysis
🔹 Incremental AI batching for stable large-scale data handling
🔹 Insight generation including summaries, word clouds, and comparative analysis
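As a rough illustration of the insight-generation step, the sketch below asks an OpenAI chat model to produce a “For” and “Against” comparison with a short summary for a batch of ideas. The prompt wording and model name are assumptions for the example, not the production configuration.

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// "For" / "Against" comparison for a small batch of ideas.
async function compareIdeas(ideas: string[]): Promise<string> {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    messages: [
      {
        role: "system",
        content:
          "For each idea below, list the strongest arguments for it and against it, " +
          "then finish with a short analytical summary.",
      },
      {
        role: "user",
        content: ideas.map((idea, i) => `${i + 1}. ${idea}`).join("\n"),
      },
    ],
  });

  return response.choices[0].message.content ?? "";
}
```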
IdeaScale AI has been successfully launched to production and is now actively used by administrators and moderators to gain deeper insight into idea submissions.
Key outcomes include:
🔹 Stable processing of large idea datasets
🔹 Reduced AI-related system errors
🔹 Regulatory-compliant government deployment
🔹 Enhanced analytical capabilities within the IdeaScale ecosystem
Contact us today and see how we’ve built and deployed dual-LLM architectures, handled large-scale dataset processing, and navigated government compliance environments. Let’s transform your product with intelligent, production-ready AI solutions.
The client was pleased with the resulting product — it was stable, bug-free, and connected smoothly with their core app. BeeWeb fostered a smooth process through timely execution and clear communication, and their adherence to delivering high-quality outputs stood out.
— Nick Jain / CEO, IdeaScale / United States
⭐⭐⭐⭐⭐ 5.0 rating on Clutch
👉 Read the full review on Clutch